www.ijcseonline.org
International Journal of Computer Sciences and Engineering (ISSN: 2347-2693)
— Diabetic Retinopathy (DR), also known as diabetic eye disease, is damage to the retina caused by diabetes and can eventually lead to blindness. Early detection of the disease is therefore needed; manual detection is time consuming and prone to observation error. Hence, several computer-aided systems have been introduced that provide fast and consistent diagnostic aid for the biomedical and health informatics field. This paper considers DR detection methods that use machine learning techniques. One system uses classifiers such as the Gaussian Mixture Model (GMM), k-nearest neighbor (kNN) and support vector machine (SVM), and another uses GMM, kNN, SVM and combinational classifiers to classify retinal fundus images.
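As a rough illustration of the classifier-comparison step described above (not the authors' implementation), the sketch below trains a kNN and an SVM on placeholder feature vectors; the retinal feature extraction is assumed to have been done elsewhere, and random data stands in for it.

```python
# Minimal sketch: comparing kNN and SVM on hypothetical retinal-image
# feature vectors. Random data is a stand-in for real extracted features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))          # placeholder fundus-image features
y = rng.integers(0, 2, size=200)        # 0 = normal, 1 = diabetic retinopathy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```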
There are various methods of handling Optimal Binary Search Trees in order to improve performance. One of them is dynamic programming, which incurs O(n^3) time complexity and stores the involved computations in a table. The data mining technique called data preprocessing is used to remove noise early in the dataset and enhance the consistency of the given data. Post-dynamic computing is then applied using the dynamic programming principle: it starts with only the required data and computes only the attributes necessary to construct the Optimal Binary Search Tree, with time complexity O(n) for n identifiers, integers or other complex objects. This approach avoids computing all table attributes. Hence, the cost of post-dynamic computing using dynamic programming is shown experimentally to be less than O(n^3), and in some cases even lower.
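For context, here is a minimal sketch of the classical O(n^3) dynamic-programming baseline that the abstract compares against; the access frequencies are illustrative.

```python
# Sketch of the classical O(n^3) optimal-BST dynamic program: cost[i][j] is
# the optimal expected search cost of a BST built from keys i..j with
# (hypothetical) access frequencies freq.
def optimal_bst_cost(freq):
    n = len(freq)
    cost = [[0.0] * n for _ in range(n)]
    prefix = [0.0] * (n + 1)
    for i, f in enumerate(freq):
        prefix[i + 1] = prefix[i] + f
        cost[i][i] = f
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            total = prefix[j + 1] - prefix[i]
            cost[i][j] = min(
                (cost[i][r - 1] if r > i else 0.0) +
                (cost[r + 1][j] if r < j else 0.0)
                for r in range(i, j + 1)
            ) + total
    return cost[0][n - 1]

print(optimal_bst_cost([34, 8, 50]))   # prints 142 for this small example
```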
Cloud computing is an internet-based technology that provides virtualized computer resources over the internet. Distributing the dynamic workload evenly among the computing resources in the cloud, so that no single node is overloaded or underloaded, is called load balancing. An efficient load balancer increases the performance of the cloud, maximizes cloud services and improves resource utilization. The performance of a cloud depends on many factors, and load balancing is one of the main ones. In this paper we propose a load balancing algorithm that is a variant of the Weighted Least Connection (WLC) algorithm. The proposed algorithm shows better results in several aspects, such as accurate calculation of the workload on a resource, efficient distribution of the workload over the service nodes, improved response time and reduced overall task execution time.
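A minimal sketch of the Weighted Least Connection selection rule that the proposed variant builds on, assuming each service node exposes a capacity weight and a count of active connections; the node data below is made up.

```python
# Weighted Least Connection (WLC) selection rule: dispatch to the node with
# the smallest ratio of active connections to capacity weight.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    weight: float        # relative capacity of the service node
    connections: int     # currently active tasks/connections

def pick_node(nodes):
    return min(nodes, key=lambda n: n.connections / n.weight)

nodes = [Node("vm1", 4.0, 12), Node("vm2", 2.0, 5), Node("vm3", 1.0, 1)]
target = pick_node(nodes)
target.connections += 1      # assign the incoming task to the chosen node
print("dispatch to", target.name)
```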
— This paper proposes a novel universal steganalysis framework that can be applied to spatial-domain and JPEG-domain steganography algorithms. The objective is to develop a steganalysis algorithm that identifies any distribution (uniform or non-uniform) of stego-payloads. The proposed framework uses a 3-way tensor model to extract the image features that are important for estimating the embedded change irrespective of domain. To obtain accurate results and analyze the error, a 360-degree bit change estimation is performed. Experimental results on 3000 images show a good detection rate in both domains and reasonable false acceptance and false rejection rates, depending on the payload, when tested against most steganography algorithms.
— Data leak detection plays a major role in industry, since data leaks pose a serious threat to online social media, sensitive data and so on. We take two papers for this survey, both from the area of information forensics and security. The first paper develops a model (PPDLD) based on a fuzzy fingerprint method; its goal is to generate a special type of digest called a fuzzy fingerprint, the Rabin fingerprint algorithm is introduced for sampling, and a filtering method is used during the digestion process. The second paper deals with fast detection of transformed data leaks. It suggests a privacy-preserving method based on an alignment algorithm and aims to detect long and inexact leak patterns in sensitive data and network traffic, with detection based on a comparable sampling algorithm.
— Language is a hallmark of intelligence, and endowing computers with the ability to analyze and generate language, the field of research known as Natural Language Processing (NLP), has long been a dream of Artificial Intelligence. Software requirements are typically captured in a natural language (NL) such as English and then analyzed by software engineers to produce a formal software design or model. However, English is syntactically ambiguous and semantically inconsistent. English specifications of software requirements can therefore lead to erroneous and absurd software designs and implementations, and the informal nature of English is also a major obstacle to machine processing of complex requirement specifications. To tackle this problem, a controlled NL representation for software requirements is needed in order to generate precise and consistent software models. The proposed framework aims to model complex software requirements expressed in natural language and represent them with a methodology that captures the natural language understanding (NLU) of events and models them using Stochastic Petri Nets (SPN), rather than only an intermediate graph-based structure, using NLP techniques; this helps remove ambiguity and supports correct interpretation of the requirements. To eliminate ambiguity, the work combines all the different meanings (SPN graphs) of each ambiguous sentence into a colored SPN graph. SPNs are state machines that help us visualize the combined SPN graph better. They can also represent knowledge about the requirement, which can be used to derive test cases in the early development phases. The aim of the proposed work is thus twofold: overcoming the problems of ambiguity and of knowledge representation. The stakeholder's document is the input to the framework and is pre-processed by pre-filters with specific functionality to improve parsing. The parsed output is converted into a simple graph, which in turn is converted into a colored SPN graph to resolve ambiguity. The pre-filter may be designed with self-learning capabilities to improve the output without human involvement.
Thyroid disease is one of the most common diseases found in human beings. Disorders of the thyroid gland range from low production to high production of the thyroid hormone. It is always recommended to diagnose the disease at an early stage in order to prevent further harmful effects and to provide treatment that keeps the thyroid hormone at a normal level. Data mining plays a vital role in health care applications and is used to analyze large volumes of data. One of its important tasks is predicting disease at an early stage, which helps physicians give better treatment to patients. Classification is one of the most significant data mining techniques; it is a supervised learning method used to classify data into predefined classes. Data mining techniques are mainly used in healthcare organizations for decision making, diagnosing diseases and giving better treatment to patients. The hypothyroid dataset used for this study is taken from the University of California Irvine (UCI) data repository. The research work is carried out with the Waikato Environment for Knowledge Analysis (WEKA) open source software under Windows 7. An experimental study is carried out using the J48 and Decision Stump tree techniques. The data records are classified as negative, compensated, primary and secondary hypothyroid. The performance of both classification techniques is evaluated and their accuracy compared through the confusion matrix. It is concluded that J48 gives better accuracy than the Decision Stump tree technique.
There are various pattern matching algorithms that are static and restrictive and take many comparisons to find a given pattern in a text. In order to search for a pattern, or a substring of a pattern, with fewer comparisons, a general data mining technique called data preprocessing is used, here named D-PM using DP, together with a one-time-look indexing method. D-PM using DP finds the given pattern, or a substring of it, in less time, and the time complexity involved is lower than that of existing pattern matching algorithms. The new pattern matching algorithm with data preprocessing (D-PM using DP) provides pattern matching with dynamic search behavior and gives users flexibility in searching.
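The abstract does not give the indexing method in detail; the sketch below shows one plausible "one-time look" index, a map from character to positions built in a single preprocessing pass, which is then used to verify the pattern only at candidate positions. This is an assumption for illustration, not the paper's exact algorithm.

```python
# One-time preprocessing of the text into a character-to-positions index,
# then substring search only at positions where the first pattern character
# occurs, reducing full comparisons.
from collections import defaultdict

def build_index(text):
    index = defaultdict(list)
    for i, ch in enumerate(text):
        index[ch].append(i)
    return index

def search(text, pattern, index):
    m = len(pattern)
    return [i for i in index.get(pattern[0], ())
            if text[i:i + m] == pattern]

text = "data preprocessing reduces pattern matching comparisons"
idx = build_index(text)
print(search(text, "pattern", idx))   # -> [27]
```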
— Constraint-based pattern mining and association rules are used in many applications, such as genetic sequence analysis, bankruptcy prediction in finance, fraud detection in securities and plant classification in agriculture, to obtain knowledge that is interesting to the user. Constraints are useful for eliminating unwanted rules and also address the rule explosion problem. Many algorithms have been proposed for constraint-based pattern mining and association rule generation. Constraints take the form of attributes, item length, time or duration, regular expressions and so on. Pushing constraints into the mining process yields discoveries that are interesting to the user. The literature shows that an algorithm's performance improves when constraints are applied during mining. This paper surveys the literature on the use of constraints in association rule generation, covering the different categories of constraints and their properties.
— User authentication is a crucial service in wireless sensor networks (WSNs) because wireless sensor nodes are typically deployed in an unattended environment, leaving them open to possible hostile network attacks. The main goal of this research is to authenticate remote users in a convenient and secure manner. In this paper, we propose an Artificial Bee Colony (ABC) optimization algorithm for matching, used for user authentication in hierarchical wireless sensor networks based on biometric (fingerprint) data. In the proposed scheme, the ABC algorithm calculates the standard deviation (threshold value) from the biometric fingerprint data, which is used to authenticate the user with maximum fitness in an optimized and secure manner.
Generally, the huge data of any organization exhibits redundancy, noise and inconsistency. To eliminate these, data preprocessing should be performed on the raw data before a sorting technique is applied. Data preprocessing includes methods such as data cleaning, data integration, data transformation and data reduction; depending on the complexity of the given data, these methods are applied to the raw data to produce quality data. External sorting is then applied. The proposed external sorting takes fewer passes than the log_B(N/M) + 1 passes of traditional B-way external merge sorting. The number of input/output operations of the proposed method is also less than the 2*N*(log_B(N/M) + 1) I/Os of the traditional method, and the proposed method uses fewer runs than basic external sorting.
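The two cost formulas quoted above can be evaluated directly; the sketch below plugs in illustrative values for N (data pages), M (memory pages per run) and B (merge fan-in).

```python
# Worked example of the B-way external merge sort cost formulas:
# passes = 1 + ceil(log_B(N/M)), total block I/Os = 2 * N * passes.
import math

def merge_sort_cost(N, M, B):
    passes = 1 + math.ceil(math.log(N / M, B))
    return passes, 2 * N * passes

N = 1_000_000   # pages of data
M = 1_000       # pages that fit in memory (initial run length)
B = 100         # merge fan-in (ways)
passes, ios = merge_sort_cost(N, M, B)
print(passes, "passes,", ios, "I/Os")   # 3 passes, 6,000,000 I/Os
```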
— Image segmentation is a very important step in biomedical diagnosis based on image data analysis. In medical analysis, the accuracy of image segmentation is a critical clinical requirement for localizing body organs or pathologies in order to raise the quality of disease or infection prediction. This paper reviews several articles in which recent AI-based biomedical image segmentation techniques are applied to different imaging color space models. It describes how various computer-assisted diagnosis systems work toward the goal of finding abnormal segments of body organs in biomedical images from MRI, ultrasound and other modalities. It has been observed that segmentation approaches that define an active shape model and then localize the potential area of interest using thresholding broadly give accurate results.
— Security-typed programming languages aim to track insecure information flows in application programs. This is achieved by extending data types with security labels in order to identify the confidentiality and integrity policies for each data element. Such policies specify which principals or entities are allowed to read from or write to the value of the data, respectively. In this paper, we evaluate the run-time overhead of dynamic information flow (DIF) analysis in security-typed programming languages. Such analysis is performed by including the security labeling in the dynamic operational semantics. Our evaluation mechanism relies on developing two different language implementations for a simple while-programming language that is considered as a case study. The first is a traditional interpreter that implements the ordinary operational semantics of the language without security labeling of data types and hence performs no information flow analysis. The second is an interpreter that performs dynamic information flow analysis by implementing the security labeling semantics (where language data types are augmented with security labels). Next, the execution times of a program run under both interpreters are measured (one execution time per interpreter). The resulting difference in execution time represents the absolute run-time overhead of dynamic information flow analysis. We have calculated this difference for several benchmark programs executed using both implementations.
— In this system, an approach to clone analysis and vulnerability detection for web applications is proposed, together with a prototype implementation for web pages. Our approach analyzes the page structure, implemented by specific sequences of HTML tags, and the content displayed, for both dynamic and static pages. Moreover, for a pair of web pages we also consider the similarity degree of their Java source. The similarity degree can be adapted and tuned in a simple way for different web applications. We report the results of applying our approach and tool in a case study. The results confirm that lack of analysis and design of a web application leads to page duplication. In particular, these results allowed us to identify common features of the web pages that could be integrated by removing duplications and code clones. Moreover, the clone analysis and vulnerability detection of the pages provided information to improve the general quality and the conceptual/database design of the web application. We plan to exploit the results of the code clone analysis method to support web application reengineering activities.
— Communication security is an essential and increasingly difficult issue in wireless networks. A physical-layer approach to secret key generation, which is both fast and independent of channel variations, is considered. This approach makes a receiver jam the signal in a manner that still permits it to decode the data, yet prevents other nodes from decoding it. Another well-known approach to achieving information-theoretic secrecy relies on deploying artificial noise to blind intruders' interception at the physical layer. A multiple inter-symbol obfuscation (MIO) scheme is proposed that uses a set of artificial noisy symbols to obfuscate the original data symbols at the physical layer. MIO can effectively enhance wireless communication security.
— This paper presents a cost-effective product to automatically monitor and detect outages in villages that do not have a reliable, uninterrupted supply of electricity. The product can help prevent the malpractice and corruption of linemen who suppress outage complaints and delay the restoration process, and hence make the electricity supply trustworthy.
— Clones are pieces of software created as copies of original software. More specifically, the idea behind software cloning is to create new software that replicates the appearance and functionality of the original software as closely as possible. It is important to understand that cloning does not have to involve any source code from the original software; software cloning typically occurs when the source code of the original software is not available. As a result, software cloning does not imply source code copying, and it goes well beyond simply presenting a similar user interface. The goal of cloning is to create a new software program that mimics everything the original software does and the way in which it does it.
— Minimal path techniques can efficiently extract curve-like structures by optimally finding the integral minimal-cost path between two seed points. The first method is a novel minimal path-based algorithm that works on more general curve structures and demands less initial input from the user than prior minimal path algorithms. Its main novelties and benefits are that it can find both closed and open curves, including complex topologies containing multiple branch points and multiple closed cycles, without prior knowledge of which of these types is to be extracted, and that it requires only one input point which, in contrast to older methods, is no longer constrained to be an endpoint of the desired curve but may truly be any point along it. The second method, MPP-BT (Minimal Path Propagation with Backtracking), first applies minimal path propagation from a single starting point and then, at each reached point, backtracks a few steps toward the starting point. Researchers in areas such as geometric optics, computer vision, robotics and wire routing have previously solved related minimum-cost path problems using graph search and dynamic programming principles.
— Wireless sensor networks are networks in which sensor nodes sense environmental conditions and pass the sensed information to a base station. Sensor networks are deployed in remote places such as forests and deserts. The size of a sensor node is very small, which makes it very difficult to recharge or replace its battery. Various techniques have been proposed to reduce the energy consumption of such networks, and among them clustering is an efficient technique. Clustering is of two types, dynamic and static, and in this article techniques of both types are reviewed and compared in terms of various parameters.
— Image denoising has become essential for extracting useful information from noisy images. At the same time, the processed image must preserve the relevant details of the original image. This noise suppression is useful in many applications. Speckle noise is one of the major noises affecting digital holograms, so a mechanism is needed to denoise the noisy content while preserving the valuable information. This paper presents a comparative study of BEMD (bi-dimensional empirical mode decomposition) and MBEMD (multilevel bi-dimensional empirical mode decomposition) along with the Frost filter.
— Cyber security is one of the key elements of any system, and a breach of cyber security can lead to the loss of confidential and private data. To prevent attacks on the network, an intrusion detection system using a hybrid classification technique is proposed. This IDS uses a decision tree algorithm to classify the known attack types in the dataset, and an SVM to separate normal data from the dataset, thereby detecting unknown attacks. The dataset used is the NSL-KDD dataset.
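A hedged sketch of the two-stage idea described above (not the authors' code): an SVM first separates normal records from attacks, and a decision tree then assigns a known attack type to the non-normal records; synthetic vectors stand in for preprocessed NSL-KDD features.

```python
# Hybrid IDS sketch: SVM for normal-vs-attack, decision tree for attack type.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))                   # placeholder NSL-KDD features
is_attack = rng.integers(0, 2, size=300)         # 0 = normal, 1 = attack
attack_type = rng.integers(0, 4, size=300)       # known attack categories

svm = SVC().fit(X, is_attack)                    # stage 1: normal vs attack
tree = DecisionTreeClassifier().fit(X[is_attack == 1],
                                    attack_type[is_attack == 1])

sample = X[:5]
stage1 = svm.predict(sample)
stage2 = np.where(stage1 == 1, tree.predict(sample), -1)   # -1 = normal
print(stage1, stage2)
```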
— This paper presents an approach to automatic detection of liver tumors in CT images using region growing and a Support Vector Machine (SVM), which successfully classifies liver cancer types such as hepatoma, hemangioma and carcinoma. The method addresses the problems of manual segmentation and classification, which are time consuming due to the variance in the characteristics of CT images. Our proposed method has been tested on a group of CT images obtained from hospitals in Kerala, with promising results for both liver and tumor segmentation. The average error rate and accuracy obtained by the proposed method are 0.02 and 0.9, respectively.
— Cooperative communication is one of the fastest growing areas of research in wireless sensor networks. Energy efficiency is the main issue in wireless sensor networks (WSNs), and energy consumption can be reduced using cooperative communication techniques. A number of techniques have been proposed to minimize energy consumption in wireless sensor networks, and in this paper two of them are discussed. During the route discovery process, AODV (Ad hoc On-demand Distance Vector) floods the entire network with a large number of control packets and hence finds many unused routes between source and destination. This is a major drawback of AODV, since it causes routing overhead and consumes bandwidth and node power. The proposed enhancement, CAODV (Cluster Based AODV), optimizes AODV by reducing the number of control messages generated during route discovery. The optimization clusters the nodes of the network and manages routing through cluster heads and gateway nodes. Routing using clusters effectively reduces the control messages flooded during route discovery by replacing broadcasting of RREQ packets with forwarding of RREQ packets to cluster heads. The performance of CAODV is evaluated through simulation tests, which show the effectiveness of this protocol in terms of network energy efficiency when compared against other well-known protocols.
— This paper considers circular chess as a new mode of playing board games on an automated physical platform, and discusses the development of an automatic chess board that enables the user to play chess in different formats, with the opponent's moves entirely driven by the user. It also draws on the domination game on graphs: the game played on a graph G consists of two players, Dominator and Staller, who alternate choosing a vertex of G such that whenever a vertex is chosen by either player, at least one additional vertex is dominated. Dominator wishes to dominate the graph in as few steps as possible, while Staller wishes to delay the process. The game domination number Gamma(G) is the number of vertices chosen when Dominator starts the game, and Gamma'(G), studied here with Sushisen algorithms, is the corresponding number when Staller starts the game. An imagination strategy is developed as a general Matlab tool for proving results in the domination game.
— Real-time object tracking is a perplexing task in computer vision. Many algorithms exist in the literature, such as mean shift, background-weighted histogram (BWH) and corrected background-weighted histogram (CBWH), for tracking moving objects in a video sequence. This paper attempts a comparative analysis of the three methods in terms of performance parameters such as normalised centroid distance, overlap and number of iterations, using two types of features: the color histogram and the color-texture histogram. Experimental results show that CBWH performs better than basic mean shift and BWH.
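For reference, here is a small sketch of the two evaluation metrics named above, normalised centroid distance and overlap (intersection-over-union), for axis-aligned boxes given as (x, y, w, h); the example boxes and frame diagonal are made up.

```python
# Tracking evaluation metrics for axis-aligned boxes (x, y, w, h).
def centroid(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def centroid_distance(a, b, frame_diag):
    (ax, ay), (bx, by) = centroid(a), centroid(b)
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 / frame_diag  # normalised

def overlap(a, b):                      # intersection-over-union
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / float(aw * ah + bw * bh - inter)

gt, pred = (50, 60, 40, 80), (55, 70, 40, 80)
print(centroid_distance(gt, pred, frame_diag=800.0), overlap(gt, pred))
```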
— Big Data can bring big benefits to all sectors of life through smarter moves, for example by analyzing huge datasets immediately and allowing decisions to be made based on what has been learned, by gauging customer needs and analyzing customer satisfaction in a timely manner, or by quickly providing diagnosis or treatment options. These capabilities can drive business and economic growth. Until recently, it was hard to benefit from Big Data without heavy infrastructure investment; enterprises suffered from many challenges related to the lack of capacity to process and store huge datasets adequately and the inability to manage and extract value from them. Times have changed: cloud computing technology has evolved rapidly to bridge the storage and processing gap and has opened up many options for using Big Data, for both individuals and organizations, without massive on-site storage and data processing facilities. This paper presents the concept, advantages, characteristics, processing and applications of Big Data. It then proposes a model that integrates Big Data and cloud computing technology, based on the three basic cloud service layers, to present a new model of Big Data as a Service (BDaaS). The proposed BDaaS model allows an enterprise to implement various Big Data functions using outsourced services (such as Hadoop, Altiscale and Qubole) clearly and easily, moving them out of the expensive whirlpool of updating and maintaining their own infrastructure.
— High utility pattern mining is a recent development in the area of data mining. The problem of mining utility patterns under the itemset share framework is a tricky one, as the interestingness measure has no anti-monotonicity property. Former work on this problem employs a two-phase, candidate-generation approach, with one exception, which is inefficient and not scalable to large databases. This paper reviews earlier implementations and strategies for mining high utility patterns in detail. We also look ahead to some strategies for mining sequential patterns.
— Satellite image processing is one of the important research areas in the field of digital image processing and is a challenging task for researchers. It is often required to remove noise and smooth the image to highlight features of interest for image analysis; extracting significant information from satellite images in this way is termed image enhancement. It is an important step for the overall image recognition and interpretation process and serves as a preprocessing step toward image analysis. Image enhancement can be performed in the spatial or frequency domain. In this paper, we focus on spatial-domain enhancement techniques for satellite images. Some of the important image enhancement techniques, such as contrast stretching, decorrelation stretch, histogram equalization and contrast-limited adaptive histogram equalization, are experimented with and compared for visual interpretability. Two parameters, Mean Squared Error (MSE) and Peak Signal to Noise Ratio (PSNR), are used for performance evaluation. The techniques are tested using 20 LandSat satellite images with different illumination effects, and the experiments were carried out using the soft computing tool Matlab. It was observed that, for satellite images, contrast stretching gives better results than the other techniques.
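The two evaluation measures used above are standard; a minimal sketch of their computation for 8-bit images follows, with random arrays standing in for an original/enhanced pair.

```python
# MSE and PSNR for 8-bit images (peak value 255).
import numpy as np

def mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (64, 64), dtype=np.uint8)
enhanced = np.clip(original.astype(int) + rng.integers(-5, 6, (64, 64)), 0, 255)
print("MSE:", mse(original, enhanced), "PSNR:", psnr(original, enhanced))
```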
— The face is an important biometric feature for personal identification. Human beings easily detect and identify faces in a scene, but this is very challenging for an automated system, hence the need for reliable identification methods for user interaction. A computer application is presented that automatically identifies or verifies a person from a digital image or a video frame by comparing selected facial features from the image with a facial database. One retrieval method is content-based image retrieval (CBIR), which retrieves images on the basis of automatically derived features. This paper draws on that idea but focuses on a low-dimensional feature-based indexing technique for efficient and effective retrieval performance. A static appearance-based retrieval system for face recognition, referred to as a hierarchical model, is proposed based on singular value decomposition (SVD). It differs from principal component analysis (PCA), which effectively considers only the Euclidean structure of the face space and leads to poor classification performance under large facial variations such as expression, lighting and occlusion, because the gray-value matrices it manipulates are very sensitive to these variations. Every image matrix has a singular value decomposition and can be regarded as a composition of a set of base images generated by SVD, and these base images are sensitive to the composition of the face image. The experimental results show that SVD provides a better representation and achieves lower error rates in face recognition, but it drags down the performance evaluation. To overcome this, a controlling parameter α, ranging from 0 to 1, is introduced; better results are achieved for α = 0.4 than for other values of α, and it is also seen to reduce classification redundancy. Keywords: face recognition, feature-based methods, singular value decomposition, Euclidean distance, original gray value matrix.
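The abstract does not spell out exactly how the controlling parameter α enters the SVD-based representation; the sketch below shows one plausible reading, assumed purely for illustration, in which the singular values are damped as s^α (with α in [0, 1]) before the feature image is rebuilt.

```python
# Assumed SVD-based feature extraction with alpha-damped singular values.
import numpy as np

def svd_features(img, alpha=0.4, k=20):
    u, s, vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
    s_damped = s[:k] ** alpha                    # alpha in [0, 1]
    return (u[:, :k] * s_damped) @ vt[:k, :]     # mix of SVD base images

rng = np.random.default_rng(0)
face = rng.integers(0, 256, (64, 64))            # stand-in for a face image
feat = svd_features(face, alpha=0.4)
print(feat.shape)
```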
— Animal Migration Optimization (AMO) is a new swarm intelligence algorithm. This paper gives a brief introduction to a few optimization techniques, which are used for finding optimal solutions. The efficiency of AMO suffers because of its execution time. The efficiency of the animal migration optimization algorithm is improved using a few benchmark functions, which show the algorithm's performance and working; to confirm the performance of AMO, four benchmark functions are employed: Sum, Ackley, Beale and Rosenbrock. These standard benchmark functions increase the efficiency and minimize the time.
"All input is evil until proven otherwise!" This is why security technology comes into play. With the rapid growth of interest in the Internet, network security has become a major concern to companies throughout the world. The fact that the information and tools needed to penetrate the security of corporate networks are widely available has increased that concern. Because of this increased focus on network security, network administrators often spend more effort protecting their networks than on actual network setup and administration. Tools that probe for system vulnerabilities, such as the Security Administrator Tool for Analyzing Networks (SATAN), and some of the newly available scanning and intrusion detection packages and appliances, assist in these efforts, but these tools only point out areas of weakness and may not provide a means to protect networks from all possible attacks. Thus, as a network administrator, you must constantly try to keep abreast of the large number of security issues confronting you in today's world. This paper describes many of the security issues that arise when connecting a private network and helps the reader understand the types of attacks that may be used by hackers to undermine network security. For decades, technology has transformed almost every aspect of business, from the shop floor to the shop door. While technology was a fundamental enabler, it was often driven by an operational or cost advantage and seen as separate from the business itself. The new reality is that technology doesn't merely support the business; technology powers the business. IT risks are now business risks and IT opportunities are now business opportunities.
— Visual data are transmitted as high-quality digital images in the major fields of communication in all modern applications. These images are often corrupted with noise on reception after transmission. This work focuses on processing the received image before it is used in particular applications. We apply image denoising, which manipulates the DWT coefficients of the noisy image data to produce a visually high-standard denoised image. The work includes extensive reviews of various parametric and non-parametric existing denoising algorithms based on statistical estimation in the wavelet transform domain, and contains analytical results of denoising under various noise types at different intensities. The noise models include additive and multiplicative distortions, namely Gaussian noise and speckle noise. The denoising algorithm is application independent and gives very fast performance, producing the desired noise-free image even in the presence of high levels of distortion. Because of the adaptive nature of the proposed denoising algorithm, no prior knowledge about the type of noise present in the image is required.
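A minimal sketch of DWT-coefficient manipulation for denoising, here a fixed soft threshold on the detail sub-bands via PyWavelets; the adaptive, application-independent rule described above is not reproduced, and the image and threshold are illustrative.

```python
# Soft-thresholding of 2-D DWT detail coefficients with PyWavelets.
import numpy as np
import pywt

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + rng.normal(scale=0.2, size=clean.shape)

coeffs = pywt.wavedec2(noisy, "db2", level=2)
thr = 0.3
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, "db2")
print(np.mean((denoised[:64, :64] - clean) ** 2))  # residual error vs. clean
```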
Scanning Laser Ophthalmoscopes (SLOs) can be used for early detection of retinal diseases; SLO imaging is a method of examining the eye. The advantage of using an SLO is its wide field of view, which can image a large area of the retina for better identification of retinal diseases. On the other hand, during the imaging process, artefacts such as eyelashes and eyelids are also imaged along with the retinal area, which poses a big challenge for excluding these artefacts. A novel approach is proposed to automatically extract the true retinal area from an SLO image based on image processing and machine learning approaches. Simple Linear Iterative Clustering (SLIC) is the algorithm used for super-pixel computation. To reduce the complexity of the image processing tasks and provide a convenient primitive image pattern, pixels are grouped into different regions based on regional size and compactness, called super-pixels. The framework then computes image-based features reflecting textural information and classifies between retinal area and artefacts. The survey presents different methods that have been used to detect such artefacts.
— Influence maximization is used to maximize the benefit of viral marketing in social networks. Its drawback is that it does not distinguish specific users from others, even though some items may be useful only for those specific users. For such items, a better strategy is to maximize the influence on the specific users. In this paper, we formulate an influence maximization problem as query processing to distinguish specific users from others. We show that the query processing problem is NP-hard and that its objective function is submodular. We propose a need model for estimating the objective function and a fast greedy-based approximation method using this model. For the need model, we investigate the relationship of paths between users; for the greedy method, we devise an efficient incremental updating of the marginal gain to the objective function. We conduct experiments to evaluate the proposed method on real datasets and compare the results with those of existing methods adapted to the problem. The experimental results show that the proposed method is at least an order of magnitude faster than the existing methods in most cases while achieving high accuracy. We also implement a Maximum Coverage algorithm that spreads advertisements (product lists) category-wise: users are divided into different age-group ranges and the relevant advertisements are displayed to the corresponding age groups. This allows marketers to plan and evaluate online strategies for advertised products.
— A wireless sensor network comprises a number of small sensors that communicate with each other. Each sensor collects data and communicates through the network to a single processing center, the base station. Node communication and message passing consume energy, and this energy consumption for data transmission decreases the network lifetime significantly. Clustering is by far the best solution for saving energy in such networks. Clustering divides the sensors into groups, so that sensors communicate information only to cluster heads, and the cluster heads then communicate the aggregated information to the processing center, saving energy. This paper studies and discusses various dimensions and approaches of some widely studied clustering algorithms. It also presents a comparative study of various clustering algorithms and a discussion of potential research areas and the challenges of clustering in wireless sensor networks.
— Visual surveillance has been a very active research topic in the last few years due to its growing importance in security, law enforcement and military applications. This project presents moving object detection based on background subtraction for a video surveillance system. In any computer vision system, the important step is to separate moving objects from the background and thus detect all objects in the video images. The main aim of this paper is to design a bounding-box approach for human detection and tracking in the presence of crowds. The bounding box around each object tracks the moving objects in each frame and can be used to detect crowds and estimate crowd size. The paper presents implementation results of bounding boxes for detecting and tracking objects. To remove unwanted pixels, morphological erosion and dilation operations are performed to smooth object edges. The simulation results show that the methodology used for object detection has better accuracy and lower processing time than existing methods.
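A short sketch of the pipeline described above using OpenCV: background subtraction, a morphological opening (erosion followed by dilation) to remove stray pixels, and a bounding box around each remaining moving blob; the video path and area threshold are placeholders.

```python
# Background subtraction + morphology + bounding boxes with OpenCV.
import cv2

cap = cv2.VideoCapture("video.avi")               # placeholder input video
subtractor = cv2.createBackgroundSubtractorMOG2()
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # erosion + dilation
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 200:                        # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:                         # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```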
A Gray Scale Image processing (GSI) technique has been developed using MATLAB's Digital Signal Processing toolbox to remove the unwanted frequencies present in SODAR facsimiles and to enhance the pixel quality (intensity) of the facsimiles for a better view. In general, the sources of these noises (unwanted frequencies) are airplanes, helicopters, high-frequency sound systems and man-made noise near the SODAR antenna. Filtering techniques such as the Gaussian filter, median filter and other DSP filters have already been employed to remove noise from facsimiles, but these techniques are incapable of enhancing the pixel clarity. The GSI computing technique can be applied directly to SODAR facsimiles, which is not possible with other filtering techniques. A strength of the technique is that the process completes within 10 to 20 seconds. All mathematical computing techniques in this paper are designed and developed in MATLAB.
Cloud computing is an improved form of grid computing, cluster computing and distributed computing. It provides sharing of computing resources such as platform, software, infrastructure and data over the network on a pay-per-use basis. Cloud computing provides PaaS, IaaS and SaaS services to various cloud users and supports the private, public, community and hybrid cloud models, reducing overall cost and effort. The number of cloud users is increasing rapidly day by day, and large numbers of users require high computing resources on time, which creates a big challenge for cloud service providers to serve computing resources on time. Various cloud researchers are working on improving cloud performance through correct load distribution between cloud user requests and computing resources. In this survey paper we present a comparative study of various load distribution methods for cloud computing.
Data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different perspectives and summarizing it into useful information, making it more accurate, reliable, efficient and beneficial. Data mining uses various techniques: classification, clustering, regression and association mining. These techniques can be applied to various types of data, whether stream data or one-dimensional, two-dimensional or multi-dimensional data. In this paper we analyze the data mining techniques based on various parameters. All data mining techniques are used for prediction and for extracting useful information from a large database, and each technique has different performance and results.
— Detection and diagnosis of lung cancer from chest radiographs is one of the most important and difficult tasks for radiologists. In this paper, a combination of statistical texture and moment-invariant features is used to classify lung cancer images. These features are extracted from JSRT raw chest X-ray images. The proposed approach is built on a two-level architecture. In the first level, images are sharpened and segmented using image processing techniques to extract the region of interest, i.e. the lung, from the ribs. In the second level, statistical texture and moment-invariant features are extracted based on the shape characteristics of the region. These features are used as the input pattern to a Fuzzy Hypersphere Neural Network (FHSNN) classifier. The experimental results show that the proposed approach is superior to using statistical texture features alone in terms of recognition rate and training and testing time. Keywords: Chest Radiography, Computer Tomography (CT), Fuzzy Hypersphere Neural Network (FHSNN), Lung Nodule, Gray level co-occurrence matrix (GLCM).
— Cognitive Radio (CR) is an emerging wireless communication technology that offers a great solution to the scarcity of radio spectrum. A Cognitive Radio Network (CRN) is an intelligent wireless communication system that is conscious of its environment. Here the problem is spectrum sensing by heterogeneous nodes with different computing power and sensing range. There are two tasks in these networks: primary channel sensing, and selection of an appropriate unused channel for communication by secondary users. In this article we present an idea for formulating a contention-based channel selection algorithm using a priority queue scheduling algorithm. In a CRN, secondary users play the important role of channel selection. The algorithm will avoid collisions during data transmission between heterogeneous nodes and improve the overall network throughput.
With the rapid evolution in the mobile communication field, new alternatives have emerged in which mobile devices form self-creating, self-administering and self-organizing wireless networks. A Mobile Ad Hoc Network (MANET) is one such arbitrary network, in which all the nodes are mobile and have limited radio transmission range, battery power and channel bandwidth. Mobile ad hoc networks are often used in emergency situations. Frequent changes in topology lead to higher energy consumption, so saving power in such situations is of prime importance. This paper compares some existing power-consumption-reducing algorithms.
— Defective software modules can lead to ad hoc software failures, drive up development and maintenance costs, and result in customer dissatisfaction. Mapping defects and understanding their impact in different business applications paves the way to improving quality. Previous research has treated all bugs alike; proper identification and categorization helps handle and fix bugs diligently. Evaluation of prediction techniques is based mainly on precision and recall measures and focuses on the defects in a software system. A prediction of the number of defects left in an inspected artefact can be judiciously used for decision making. An accurate prediction of the number of defects during testing contributes not only to managing the system testing process but also to estimating the required maintenance. Building predictive models from code attributes for timely identification of fault-prone modules goes a long way toward improving software quality and testing efficiency. In short, this paper addresses the prediction of bugs using data mining techniques such as association mining, classification and clustering, which helps developers detect software defects and debug them. Unsupervised techniques come in handy for large-scale defect prediction in software modules when defect labels are not available.
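As an illustration of the classification side of this approach, the sketch below trains a simple predictive model from static code attributes and evaluates it with precision and recall. The synthetic data, the feature set and the choice of classifier are assumptions for demonstration only, not the paper's experimental setup.

```python
# Toy defect-prediction example: classify modules as fault-prone from code metrics.
# Data are synthetic; feature names and the classifier are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
# Columns stand in for: lines of code, cyclomatic complexity, number of past changes.
X = rng.normal(size=(500, 3))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# The paper evaluates prediction techniques mainly with precision and recall.
print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))
```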
Research Interests:
— In this paper, a novel differential LC voltage-controlled oscillator (VCO) is presented. The VCO is based on a gm-boosted structure to relax the oscillation start-up current requirement and reduce DC power consumption in comparison with conventional Colpitts structures. In the proposed VCO, a tunable active inductor is used as part of the LC tank instead of a passive inductor with constant inductance. The proposed VCO is designed and simulated in ADS in a 0.18 µm CMOS process. Simulation results indicate that the proposed VCO has a wide tuning range in comparison with other reported designs while consuming less DC power.
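As a reminder of why a tunable active inductor widens the tuning range (a standard LC-tank relation, not the paper's small-signal derivation; the symbols L_act, V_ctrl and C_tank are illustrative), the oscillation frequency is

f_{\mathrm{osc}} = \frac{1}{2\pi\sqrt{L_{\mathrm{act}}(V_{\mathrm{ctrl}})\, C_{\mathrm{tank}}}}

so varying the control voltage changes the inductance term directly, instead of relying only on the limited capacitance range of a varactor.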
Research Interests:
— A wormhole attack is particularly harmful against routing in sensor networks: an attacker receives packets at one location in the network, tunnels them, and replays them at another remote location. A wormhole attack can be launched easily without compromising any sensor nodes. Since most routing protocols have no mechanism to defend against wormhole attacks, a route request can be tunneled to the target area by the attacker through wormholes. We use GRPW-MuS, a geographical routing protocol for supporting mobile sinks in wireless sensor networks. GRPW-MuS is based on an architecture partitioned into logical levels and on a multipoint relaying flooding technique that reduces the number of topology broadcasts, and it uses periodic HELLO packets for neighbor detection. As introduced in References [9, 17], the wormhole attack can pose a serious threat to wireless sensor networks, especially to many routing protocols and location-based wireless security systems. Here, a trust model to handle this attack in GRPW-MuS, called GRPW-MuS-s, is provided. Using OMNET++ simulation and the MiXiM framework, results show that the GRPW-MuS-s protocol produces only very small false positive rates for wormhole detection during the neighbor discovery process (fewer than GRPW-MuS). The average energy usage at each node for GRPW-MuS-s during neighbor discovery and route discovery is lower than that of GRPW-MuS and much lower than the energy available at each node. The cost analysis shows that GRPW-MuS-s needs only a small amount of memory at each node, which makes it suitable for sensor networks.
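As a rough illustration of how a trust model can flag wormhole links during HELLO-based neighbor discovery, the sketch below penalizes neighbors whose HELLO round-trip time greatly exceeds a typical one-hop value, since a tunnelled link adds forwarding delay. This is a generic delay/trust check, not the GRPW-MuS-s specification; all thresholds and constants are assumptions.

```python
# Generic delay/trust check for suspected wormhole links (illustrative only).
ONE_HOP_RTT = 4e-3        # assumed typical one-hop HELLO round-trip time, seconds
SUSPICION_FACTOR = 3.0    # RTTs above 3x the one-hop value look tunnelled (assumed)

def suspicious_link(rtt_seconds):
    """A wormhole tunnel adds forwarding delay, so its RTT is abnormally high."""
    return rtt_seconds > SUSPICION_FACTOR * ONE_HOP_RTT

def update_trust(trust, rtt_seconds, penalty=0.3, reward=0.05):
    """Penalise suspected wormhole links; slowly restore trust for normal ones."""
    if suspicious_link(rtt_seconds):
        return max(0.0, trust - penalty)
    return min(1.0, trust + reward)

def accept_neighbor(trust, cutoff=0.5):
    """Keep a neighbor in the routing tables only while its trust stays above a cut-off."""
    return trust >= cutoff
```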
Research Interests:
— A novel vertical-cavity surface-emitting laser (VCSEL) based on two oxide layers with multiple apertures, designed to enlarge the window aperture while maintaining single transverse mode operation, is proposed and numerically investigated. The structure with multiple oxide aperture sizes has a number of advantages, including easier fabrication compared with multi-oxide-layer structures, better mechanical stability, and very high single-mode optical output power. The simulation results also show that this structure has a low threshold current. A comprehensive optical-electrical-thermal-gain self-consistent VCSEL model is used to simulate and investigate the proposed structure. It is shown that by using two oxide layers with multiple apertures, high single-mode optical output power and a single-mode VCSEL with a large active area can be achieved.
Keywords— VCSEL, Oxide layer, Multiple apertures, Transversally single-mode laser, Comprehensive VCSEL model
Research Interests:
— In this paper, a ring VCO with a wide frequency range and low phase noise in 0.18 µm CMOS technology is presented. In the proposed VCO, two techniques, current control and forward body biasing, are implemented to increase the frequency range. It is shown that forward biasing the body of the control transistor increases the frequency range noticeably. Moreover, by adding an inductor at the body of the control transistor, the phase noise is decreased as well. The phase noise at 1 MHz offset frequency is -90 dBc/Hz and the frequency range is 2-14 GHz.
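For context (a standard ring-oscillator relation, not the paper's derivation), an N-stage ring oscillator runs at

f_{\mathrm{osc}} = \frac{1}{2\, N\, t_d}

where t_d is the per-stage delay. Both current control and forward body bias reduce t_d, the latter by lowering the effective threshold voltage, which is why they extend the achievable frequency range.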
Research Interests:
— A novel low noise amplifier is proposed using low-cost 0.18 µm CMOS technology. A resistive-capacitive feedback is used to extend the bandwidth of the amplifier. As the structure is inductorless, it is suitable for low-cost integrated optical interconnects. In this paper, Improved Particle Swarm Optimization is applied to determine the optimal trans-resistance and noise of the proposed amplifier structure. Simulation results show a -3 dB bandwidth of 5 GHz with a trans-impedance gain of about 62 dBΩ. The total voltage source power dissipation is less than 5 mW, which is much less than that of conventional trans-impedance amplifiers. The output noise voltage spectral density is 9.5 nV/√Hz with a peak of 15 nV/√Hz, while the input-referred noise current spectral density is below 10 pA/√Hz within the amplifier frequency band.
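A minimal, generic particle swarm optimization sketch is given below to show how design parameters such as the feedback resistance and capacitance could be searched against a cost that trades gain against noise. It is not the paper's "improved" PSO; the cost function, parameter bounds and PSO constants are placeholder assumptions.

```python
# Generic PSO over two assumed design parameters (feedback R and C); illustrative only.
import numpy as np

def cost(params):
    r_f, c_f = params
    # Placeholder objective: prefer high trans-resistance, penalise a stand-in "noise" term.
    return -(r_f / 1e3) + 0.1 * (c_f * 1e12) ** 2

def pso(bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))   # particle positions
    v = np.zeros_like(x)                                       # particle velocities
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()]
    return gbest

# Assumed search ranges: feedback resistance 0.1-10 kOhm, feedback capacitance 0.01-1 pF.
print(pso([(1e2, 1e4), (1e-14, 1e-12)]))
```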
Research Interests:

The National Conference on Recent Innovative Trends in Engineering and Technology (NCRITET-2016) aims to provide an opportunity for academicians, researchers, scientists and industry experts engaged in teaching, research and development, and gives them a platform to present and discuss ideas and to share their views on solving real-world complex challenges in Engineering and Technology.
Research Interests: