IJCT Editor
  • https://cirworld.com/index.php/ijct/index
  • IJCT is an international peer-reviewed e-journal devoted to the fields of Computers and provides rapid publication...
Load balancing is one of the essential factors in enhancing the working performance of a cloud service provider. Cloud Computing is an emerging computing paradigm that aims to share data, calculations, and services transparently over a scalable network of nodes. Cloud computing stores data and disseminates resources in an open environment, and since the cloud has inherited the characteristics of distributed computing and virtualization, there is a possibility of machines going unused. Hence, in this paper, different load balancing algorithms have been studied. Different kinds of job types have been discussed and their problems have been reviewed. In cloud storage, load balancing is a key issue: maintaining load information would consume a lot of cost, since the system is too huge to disperse load in a timely manner. Load balancing is one of the main challenges in cloud computing; it is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overwhelmed.
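As a minimal illustration of the kind of dynamic policy such surveys cover (not code from the paper), the sketch below dispatches each job to the node with the fewest active jobs; the `Node` class and the load metric are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    active_jobs: int = 0   # hypothetical load metric

def least_loaded(nodes):
    """Pick the node currently carrying the fewest jobs."""
    return min(nodes, key=lambda n: n.active_jobs)

nodes = [Node("vm-1"), Node("vm-2"), Node("vm-3")]
for job in range(5):
    target = least_loaded(nodes)
    target.active_jobs += 1            # dispatch the job to that node
    print(f"job {job} -> {target.name}")
```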
Cloud computing is the Internet-based development and use of computer technology. It is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. Cloud computing is a hot topic all over the world nowadays, through which customers can access information and computing power via a web browser. As the adoption and deployment of cloud computing increase, it is critical to evaluate the performance of cloud environments. Currently, modeling and simulation technology has become a useful and powerful tool in the cloud computing research community to deal with these issues. Cloud simulators are required for cloud system testing to decrease complexity and separate quality concerns. Cloud computing means saving and accessing data over the internet instead of local storage. In this paper, we provide a short review of the types, models and architecture of the cloud environment.
Cloud Computing is being used widely all over the world by many IT companies, as it provides various benefits to users such as cost saving and ease of use. However, with the growing demands of users for computing services, cloud providers are encouraged to deploy large datacenters which consume a very high amount of energy and also contribute to the increase in carbon dioxide emissions in the environment. Therefore, we need to develop techniques that help achieve more environmentally friendly computing, i.e. Green Cloud Computing. Cloud computing is an increasingly popular paradigm for accessing computing resources. This paper discusses some of the research challenges for cloud computing from an enterprise or organizational perspective, and puts them in context by reviewing the existing body of literature in cloud computing. Various research challenges relating to the following topics are discussed: the organizational changes brought about by cloud computing; the economic and organizational implications of its utility billing model; and the security, legal and privacy issues that cloud computing raises. It is important to highlight these research challenges because cloud computing is not simply a technological improvement of data centers but a fundamental change in how IT is provisioned and used. This type of research has the potential to influence wider adoption of cloud computing in the enterprise, and in the consumer market too.
EEG (electroencephalography) energy is an important evaluation indicator in brain death determination based on EEG analysis. In related works, a static EEG energy value can be obtained using EMD (empirical mode decomposition), MEMD (multivariate empirical mode decomposition) and 2T-EMD (turning tangent empirical mode decomposition) for EEG-based coma and quasi-brain-death analysis. However, such methods are static and not feasible for tracking energy variation over time. In this paper, we first propose the Dynamic 2T-EMD algorithm to evaluate patients' dynamic EEG energy variation by means of a time window and time step method. As the time window slides along the time axis in time steps, the EEG energy of each corresponding step is computed and stored. The proposed algorithm is applied to analyze 19 cases of coma patients' EEG and 17 cases of quasi-brain-death patients' EEG. Two typical patients in coma and quasi-brain-death states, and one special case who passed from coma to quasi-brain-death, are taken as examples to demonstrate the algorithm's performance. Results show that EEG energy in the coma state is obviously higher than that in the quasi-brain-death state, and the method also reveals the EEG energy trend of every case, which can prevent loss of information and wrong analysis results caused by noise interference, and provide a scientific basis for doctors to evaluate patients' consciousness levels in brain death determination. The proposed algorithm will be very helpful in developing a real-time brain death diagnostic system.
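The time-window / time-step idea is easy to picture in code. The sketch below computes per-window signal energy only; it is a simplified stand-in, since the actual Dynamic 2T-EMD first decomposes each window into intrinsic mode functions. The sampling rate and window sizes are assumptions.

```python
import numpy as np

def windowed_energy(eeg, fs, win_sec=10.0, step_sec=1.0):
    """Slide a window along the signal; store the energy at each position."""
    win, step = int(win_sec * fs), int(step_sec * fs)
    energies = []
    for start in range(0, len(eeg) - win + 1, step):
        segment = eeg[start:start + win]
        energies.append(np.sum(segment ** 2))  # energy of this window
    return np.array(energies)

fs = 256                              # assumed sampling rate (Hz)
eeg = np.random.randn(60 * fs)        # stand-in for one EEG channel
print(windowed_energy(eeg, fs)[:5])
```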
In speaker diarization, speech/voice activity detection is performed to separate speech, non-speech and silent frames. The zero crossing rate and root mean square value of the frames of audio clips have been used to select training data for the silence, speech and non-speech models. The trained models are used by two classifiers, the Gaussian mixture model (GMM) and the artificial neural network (ANN), to classify the speech and non-speech frames of an audio clip. The results of the ANN and GMM classifiers are compared by the Receiver Operating Characteristics (ROC) curve and the Detection Error Tradeoff (DET) graph. It is concluded that the neural network based SAD is comparatively better than the Gaussian mixture model based SAD.
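A hedged sketch of the two frame-level features the training-data selection relies on; the frame length, hop size and random test signal are assumptions, not the paper's settings.

```python
import numpy as np

def frame_features(signal, frame_len=512, hop=256):
    """Per-frame zero crossing rate and RMS value."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)  # fraction of sign changes
        rms = np.sqrt(np.mean(frame ** 2))
        feats.append((zcr, rms))
    return np.array(feats)

audio = np.random.randn(16000)   # stand-in for one second of 16 kHz audio
print(frame_features(audio)[:3])
```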
Over the last twenty years, face recognition has made immense progress based on statistical learning or subspace discriminant analysis. This paper investigates a technique to reduce the features necessary for face recognition based on the local binary pattern (LBP), constructed by applying a wavelet transform to the local binary pattern. The approach is evaluated in two ways: a wavelet transform applied to the LBP features, and a wavelet transform applied twice, to the original image and to the LBP features. The resulting data are compared to the results obtained without applying the wavelet transform, revealing that the wavelet-based reduction achieves the same or sometimes improved accuracy. The proposed algorithm is evaluated on the Cambridge ORL face database.
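A minimal sketch of the first variant (wavelet transform applied to the LBP features), using scikit-image and PyWavelets; the 'haar' wavelet, the 'uniform' LBP variant and the image size are assumptions.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def lbp_wavelet_features(gray, P=8, R=1):
    """Compute the LBP map, then a 2-D DWT; keeping only the approximation
    sub-band roughly quarters the number of features."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    cA, (cH, cV, cD) = pywt.dwt2(lbp, "haar")   # keep low-frequency content
    return cA.ravel()

gray = np.random.rand(64, 64)             # stand-in for a face image
print(lbp_wavelet_features(gray).shape)   # 1024 features instead of 4096
```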
Nowadays surveillance systems are widely deployed in various places and generate a massive amount of video data every day. This raises threats of unauthorized access and potential privacy leakage, as the recorded videos usually contain rich identifiable information such as facial biometrics. To mitigate these threats, many existing methods perform symmetric encryption on the entire frames of the videos. Unfortunately, these methods introduce additional computation cost and storage. Moreover, as surveillance systems can be part of a distributed system, key management is critical and challenging. In this paper, we propose a novel method which incorporates a background subtraction technique and the RSA encryption algorithm. Rather than encrypting the entire frames of the videos, the proposed method detects the regions around moving objects in the frames and then performs RSA encryption on the detected regions; RSA also has advantages for key distribution and management. Our experimental results show that the proposed method involves only moderate computation cost and storage.
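The region-detection half of the pipeline can be sketched with OpenCV as below; the parameter values are assumptions. Note that textbook RSA encrypts only small blocks, so each detected region's bytes would in practice be encrypted chunk-wise (or via a hybrid scheme), a detail this sketch leaves as a comment.

```python
import cv2

# Background subtractor; history/threshold values are assumptions.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def moving_regions(frame, min_area=500):
    """Bounding boxes of moving objects: these are the regions that would
    subsequently be RSA-encrypted instead of the whole frame."""
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```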
Content Based Image Retrieval (CBIR) techniques are becoming an essential requirement in multimedia systems with the widespread use of the internet, the declining cost of storage devices and the exponential growth of un-annotated digital image information in recent years. Therefore, multi-query systems have been used rather than a single query in order to bridge the semantic gap and understand the user's requirements. Moreover, a query replacement algorithm has been used in previous works, in which the user provides multiple images as the query image set, referred to as the representative images. Feature vectors are extracted for each image in the representative image set and for every image in the database. The centroid Crep of the representative images is obtained by computing the mean of their feature vectors. Then every image in the representative image set is replaced with the same candidate image from the dataset, one by one, and new centroids are calculated for every replacement. The distance between each centroid resulting from a replacement and the representative image centroid Crep is calculated using the Euclidean distance. The cumulative sum of these distances determines the similarity of the candidate image to the representative image set and is used for ranking the images: the smaller the distance, the more similar the image is to the representative image set. However, this approach has some research gaps: extracting the features of every image in the database and comparing our image against all database images takes a lot of time, and complexity as well as cost increase. So, in our proposed work, the KNN algorithm is applied to classify the images in the database using the query images, and the candidate images are reduced to the images returned by the classification mechanism, which decreases the execution time and reduces the number of iterations. Hence, due to the hybrid model of multi-query and KNN, the effectiveness of image retrieval in the CBIR system increases. The language used in this work is C/C++ with OpenCV libraries, and the IDE is Visual Studio 2015. The experimental results show that our method is more effective in improving the performance of image retrieval.
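The query-replacement ranking described above is compact enough to sketch directly; the feature dimensionality and the random data are placeholders.

```python
import numpy as np

def replacement_score(rep_feats, candidate):
    """Replace each representative feature vector with the candidate's, and
    accumulate the Euclidean distance between the shifted centroid and the
    original centroid Crep. A smaller score means a more similar candidate."""
    c_rep = rep_feats.mean(axis=0)
    score = 0.0
    for i in range(len(rep_feats)):
        shifted = rep_feats.copy()
        shifted[i] = candidate
        score += np.linalg.norm(shifted.mean(axis=0) - c_rep)
    return score

rep = np.random.rand(4, 128)    # 4 representative images, 128-D features
cand = np.random.rand(128)      # one candidate image (after KNN filtering)
print(replacement_score(rep, cand))
```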
Mobile Ad-hoc Network (MANET) is a kind of wireless network that has the most challenging network infrastructure. It is formed using mobile nodes without any centralized administration; from the security perspective it is a self-configuring, rapidly emerging wireless technology in which each node acts as a router that forwards packets. The dynamic nature of this network makes routing protocols play a prominent role in setting up efficient routes between pairs of nodes. Dynamic Source Routing (DSR) and Ad-hoc On-Demand Distance Vector (AODV) are reactive MANET routing protocols. Most of the attacks on MANETs are routing protocol attacks; attacks on routing protocols, especially internal attacks, cause damage to MANETs. Sinkhole and black hole attacks are types of internal attack which operate by attempting to draw all network traffic to malicious nodes that fake routing updates, degrading the performance of the network. Black hole nodes should be detected in the network as early as possible via a detection mechanism that also guarantees a higher detection rate and a lower cross-over error rate. In this paper, we study the characteristics of the black hole attack and how it affects the performance of an on-demand distance vector routing protocol such as AODV, recognizing the presence of a black hole node from the packet flow information between nodes and isolating it from the network. We have evaluated the performance of the system using the widely used simulator NS2; the results prove the effectiveness of our prevention and detection method.
The advent of digital systems for the production and transmission of information decisively influences human progress and represents the future in any field of social life. In order to survive, organizations must correlate their objectives with the new trend of a society based on information, deeply marked by globalization. In recent years, new computational paradigms have been proposed and adopted, including Cloud computing. Together with the stabilization of technologies related to Cloud computing, SQL databases have become more attractive due to native support for scalability and distributed architecture and the fact that many of them can be offered as services. The paper presents a few important aspects of cloud computing and proposes a new database designed to be implemented in the cloud. We offer a new model and an example of its implementation in Romanian medicine.
Code switching is a widely observed but less studied phenomenon, especially in multilingual and multicultural communities. The present study therefore investigated the status of grammatical code switching among Iranian EFL university students, along with the role of the teacher in managing code switching. Two classes including 96 participants from two different universities were observed carefully and the required data were collected. Analyses of the data revealed the varying nature of code switching in both settings. Moreover, the obtained frequencies revealed that among the four types of 'trigger words', only 'proper nouns' and the discourse marker 'OK' were remarkably responsible for code switching in one setting, while 'lexical transfer' and the discourse marker 'OK' led to code switching in the other. Meanwhile, four functions of code switching were determined as the teacher's role in dealing with code switching: providing equivalents for key words, showing humor, inspiring learners, and explaining the required assignments.
Keywords: Code Switching, EFL Classroom, Interference and Interaction
Computer memory is expensive, and recording the data captured by a webcam needs memory. In order to minimize the memory used in recording human motion from the webcam, this algorithm uses motion detection, a process that measures the change in speed or vector of an object in the field of view. The application records only when motion is detected, automatically saving the captured image in its designated folder.
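A minimal sketch of the idea with OpenCV frame differencing; the pixel-count threshold, output folder and stop condition are assumptions, not the paper's values.

```python
import cv2

cap = cv2.VideoCapture(0)                 # default webcam
_, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
saved = 0
while saved < 10:                         # stop after 10 captures (demo only)
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)        # what changed since the last frame
    changed = cv2.countNonZero(cv2.threshold(diff, 25, 255,
                                             cv2.THRESH_BINARY)[1])
    if changed > 5000:                    # enough changed pixels => motion
        cv2.imwrite(f"captures/motion_{saved:03d}.png", frame)  # folder assumed
        saved += 1
    prev = gray
cap.release()
```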
Owing to the conception of big data and massive data processing, there are increasing woes related to the temporal aspects of data processing. In order to address these issues, continuous progress in data collection and storage technologies, and in designing and implementing large-scale parallel algorithms for data mining, is emerging at a rapid pace. In this regard, Apriori algorithms have a great impact on finding frequent item sets using candidate generation. This paper presents highlights of a parallel algorithm for mining association rules using MPI for message passing in a master-slave structural model.
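A hedged sketch of the master-slave support-counting step using mpi4py (an assumption; the paper's own MPI code is not reproduced): the master broadcasts the candidate itemsets, every process counts support on its own partition, and the counts are reduced back to the master. Run with, e.g., `mpiexec -n 4 python apriori_mpi.py`.

```python
from collections import Counter
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Toy transaction database; each process takes a round-robin partition.
transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b"}] * 100
my_part = transactions[rank::size]

# Master broadcasts the candidate itemsets for this Apriori pass.
candidates = comm.bcast([("a",), ("b",), ("c",), ("a", "c")], root=0)

local = Counter()
for txn in my_part:
    for cand in candidates:
        if set(cand) <= txn:              # candidate contained in transaction
            local[cand] += 1

# Counter supports "+", so MPI.SUM merges the partial counts on the master.
totals = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(totals)
```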
Cloud Computing provides different types of services such as SaaS, PaaS and IaaS. Each of them has its own security challenges, but IaaS undertakes all types of challenges, viz. network attacks, behaviour-based attacks, request-based attacks (i.e., handling requests from untrusted users), XSS (cross-site scripting) attacks, DDoS and many more. These attacks are independent of each other, and consequently the QoS provided by the cloud is compromised. This paper proposes a history-aware, behaviour-based Intrusion Detection System (BIDS). BIDS provides detection of untrusted users and of false requests that may lead to spoofing, XSS or DoS attacks and many more, in addition to certain cases where a user login or password is compromised. History-aware BIDS can be helpful in detecting such attacks and maintaining the QoS provided to the user in cloud IaaS (Infrastructure as a Service).
This paper presents a review of digital image filtering techniques. The main emphasis is on median filtering and its extended versions, such as hybrid median filtering, relaxed median filtering, etc. It is found that median filtering still demands some enhancements, as it is best suited for salt-and-pepper noise only. Through the survey, suitable gaps in the existing literature are identified. At the end, a comparison table is drawn among the existing techniques.
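For concreteness, one common formulation of the hybrid median filter can be sketched as below (a sketch of the general technique, not of any specific paper in the survey): it takes the median of the plus-shaped neighbourhood median, the X-shaped neighbourhood median, and the centre pixel, which preserves edges and corners better than the plain median.

```python
import numpy as np
from scipy.ndimage import median_filter

def hybrid_median(img, size=5):
    k = size // 2
    plus = np.zeros((size, size), dtype=bool)
    plus[k, :] = plus[:, k] = True                       # plus-shaped footprint
    cross = np.eye(size, dtype=bool) | np.fliplr(np.eye(size, dtype=bool))
    m_plus = median_filter(img, footprint=plus)
    m_cross = median_filter(img, footprint=cross)
    return np.median(np.stack([m_plus, m_cross, img]), axis=0)

noisy = np.random.randint(0, 256, (64, 64)).astype(float)
print(hybrid_median(noisy).shape)
```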
Gaussian Mixture Models (GMMs) have been proposed for off-line signature verification. The individual Gaussian components are shown to represent global features such as skewness, kurtosis, etc. that characterize various aspects of a signature and are effective for modeling its specificity. The learning phase involves the use of the GMM technique to build a reference model for each signature sample of a particular user. The verification phase uses three layers of statistical techniques: the first layer computes a GMM-based log-likelihood probability match score; the second layer maps this score into soft boundary ranges of acceptance or rejection through z-score analysis and a normalization function; and the third applies a threshold to arrive at the final decision of accepting or rejecting a given signature sample. The focus of this work is on faster detection of authenticated signatures, as no vector analysis is done in the GMM. From the experimental results, the new features proved to be more robust than related features used in earlier systems. The FAR (False Acceptance Rate) and FRR (False Rejection Rate) for the genuine samples are 0.15 and 0.19, respectively.
Keywords: Gaussian Mixture Model (GMM), Z-score analysis, False Acceptance Rate (FAR), False Rejection Rate (FRR)
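The three verification layers map naturally onto a few lines of scikit-learn; this is a sketch under assumed settings (4 mixture components, 16-D features, a z-score cut-off of -2), not the paper's configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

genuine = np.random.rand(40, 16)          # stand-in features for one user
gmm = GaussianMixture(n_components=4, covariance_type="diag").fit(genuine)

ref_scores = gmm.score_samples(genuine)   # layer 1: log-likelihood scores
mu, sigma = ref_scores.mean(), ref_scores.std()

def verify(sample, z_threshold=-2.0):
    score = gmm.score_samples(sample[None, :])[0]
    z = (score - mu) / sigma              # layer 2: z-score normalisation
    return z >= z_threshold               # layer 3: threshold decision

print(verify(np.random.rand(16)))
```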
The Campus Local Area Network (CLAN) of an academic institution interconnects computers ranging in number from one hundred to about twenty-five hundred, located in academic buildings, hostel buildings, faculty quarters, the students' amenities centre, etc., all around the campus. The students, faculty and supporting staff members use the network primarily for internet usage at both personal and professional levels, and secondarily for the available services and resources. Various web-based services, viz. web services, mail services, DNS and FTP services, are generally made available in the campus LAN, apart from various intranet-based services for LAN users. Campus LAN users from the hostels change very frequently and also sometimes become targets (we call them soft targets) of attackers or zombies, because of either inadequate knowledge of how to protect their own computer/laptop, which is also a legitimate node of the campus LAN, or their enthusiasm for experimentation. The interconnectivity of these legitimate nodes of the campus LAN with attackers on the World Wide Web makes the computers connected to the LAN easy targets for malicious users who attempt to exhaust resources by launching Distributed Denial-of-Service (DDoS) attacks. In this paper we present a technique to mitigate distributed denial of service attacks in a campus-wide LAN by limiting the bandwidth of the affected computers (soft targets) of the virtual LAN from a unified threat management (UTM) firewall. The technique is supported by bandwidth utilization reports of the campus LAN with and without the bandwidth-limiting rule, obtained from the UTM network traffic analyzer. The graphical analyzer report on bandwidth utilization, with transmitted and received bits of the campus LAN after implementation of our bandwidth-limiting rule, is also given.
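The bandwidth-limiting rule itself is UTM device configuration, but a token bucket is the usual software analogue of such a rate limit; the sketch below (rate and burst values are arbitrary assumptions) shows the mechanism that caps a soft target's throughput.

```python
import time

class TokenBucket:
    """A host may burst up to `capacity` bytes, refilled at `rate` bytes/s."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False                       # over the limit: delay or drop

limiter = TokenBucket(rate=128_000, capacity=64_000)  # ~1 Mbit/s per soft target
print(limiter.allow(1500))                 # one full-size Ethernet frame
```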
In this paper we propose a GUI-based prototype for user-centered environments like classrooms, library halls, laboratories, meeting halls, coffee shops, kitchens, living rooms and bedrooms, which recommends useful services based on the user's context. Service recommendation is mainly based on parameters such as user, location, time, day and mood. In addition, whenever a conflict arises among different users, it is resolved using conflict resolving algorithms. The motivation behind the proposed work is to improve the user satisfaction level and the social relationship between users and devices. The prototype contains simulated sensors which are used to capture raw context information, which is then described with a meaningful English sentence, and services are recommended based on the user's situation. The proposed conflict resolving algorithms are a rule-based algorithm, a Bayesian probability based algorithm and a rough set theory based algorithm. The number of conflicts resolved by these algorithms is also analyzed at the end.
Designing heterogeneous distributed systems requires the use of tools that facilitate deployment and the interaction between platforms. In this paper we propose using the Simple Object Access Protocol (SOAP) and REpresentational State Transfer (REST), two main approaches for creating applications based on distributed services, for distributed computation. Our aim is to demonstrate how they can be used to develop evolutionary computation systems on heterogeneous platforms, taking advantage of their ability to deal with heterogeneous infrastructures and environments, and giving support for parallel implementations with high platform flexibility. The two approaches are different and present advantages and disadvantages for interfacing to web services: SOAP is conceptually more difficult (it has a steeper learning curve) and more "heavyweight" than REST, although REST lacks standards support for security. The results obtained in different experiments show that both SOAP and REST can be used as communication protocols for distributed evolutionary computation. The results obtained are comparable; however, for large amounts of data (big messages), REST communications take longer than SOAP communications.
Reliable operation of power transformers is necessary for effective transmission and distribution of power. During normal functioning of a power transformer, distinct types of faults occur due to insulation failure, oil aging products, overheating of windings, etc., which affect the continuity of the power supply and lead to serious economic losses. To avoid interruptions in the power supply, various software fault diagnosis approaches have been developed to detect faults in the power transformer and eliminate their impacts. SVM and SVM-SMO are the software fault diagnostic techniques developed in this paper for continuous monitoring and analysis of faults in the power transformer. The SVM algorithm is fast, conceptually simple and easy to implement, with better scaling properties for few training samples; its performance on large training samples is complex, subtle and difficult to implement. In order to obtain better fault diagnosis on large training data, SVM is optimized with the SMO technique to achieve high interpretation accuracy in fault analysis of the power transformer. The proposed methods use a Dissolved Gas-in-oil Analysis (DGA) data set obtained from 500 kV main transformers of the Pingguo Substation in the South China Electric Power Company. DGA is an important tool for diagnosis and detection of incipient faults in power transformers. The Gas Chromatograph (GC) is one of the traditional methods of DGA, utilized to choose the most appropriate gas signatures dissolved in transformer oil to detect the types of faults in the transformer. The simulations are carried out in MATLAB on a PC with an Intel Core processor running at 3 GHz and 2 GB of RAM. The results obtained by the optimized SVM and SVM-SMO are compared with existing SVM classification techniques. The test results indicate that the SVM-SMO approach significantly improves the classification accuracy and computational time for power transformer fault classification.
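As an illustration only: scikit-learn's SVC (whose libsvm backend uses an SMO-style solver) fits the shape of this pipeline, though the Pingguo DGA data are not public here, so random stand-in values are used for the five fault-gas features.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.abs(np.random.randn(200, 5))   # stand-in for H2, CH4, C2H6, C2H4, C2H2 ppm
y = np.random.randint(0, 4, 200)      # stand-in fault classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X[:150], y[:150])
print("test accuracy:", clf.score(X[150:], y[150:]))
```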
This paper presents novel algorithms and an architecture for a robot-based agricultural implement for tilling the agricultural field. The hardware consists of a platform with four wheels and a mechanism to facilitate the forward, reverse and lateral movement of the wheels. The platform also houses a turntable, a lift and a plough. Various user-defined inputs may be programmed, such as the length and breadth of the field, the spacing between two till lines and the depth of tilling. Thereafter, the entire tilling operation of the field is automated. The robotic vehicle begins the operation from the top left corner and moves towards the right, tilling the field as it moves forward. Once the required length of field is reached, the vehicle halts and moves to the next row at the specified spacing between tilled rows. The movement from one row to another is lateral, achieved by rotating the robotic vehicle's wheels by 90 degrees; no tilling is carried out until the next row is reached. The tilling operation is resumed by rotating the plough by 180 degrees and moving the vehicle in the reverse direction. This process continues to the end of the field, covering the entire breadth and maintaining the desired spacing and depth. This automated tilling requires the development of novel algorithms and an optimized architecture, which are presented in this paper. The system is user-friendly and upgradable. The entire system has been realized using Verilog and is RTL-compliant. The design is both platform and technology independent and has been simulated using ModelSim.
The Online Public Access Catalogue (OPAC) plays a vital role in central libraries and university/college libraries. Most college libraries use an OPAC for easy search and retrieval of books and their status in a particular library. The main limitation of the OPAC in the current system is that we are able to search and retrieve information about the books of that particular library, but not outside of it. In a scenario with multiple libraries on a campus, if we need to search for a book in all the libraries, there are only two possibilities for building the OPAC: either make a single database for all the libraries, or ask the user to search each library's OPAC system manually. Neither solution is feasible in a real-time environment. So, to make the above scenario feasible, we need to build a distributed environment for the OPAC, which will have all the individual databases connected remotely.
The study is an online, computer-aided tool that was designed primarily for the conduct of online examinations. The system was created using PHP, a web-based scripting language, and MySQL as the database software. The system focuses on the automation of students' examinations: preparation, scheduling, checking and grading. A database is provided for the storage of exam questions, answers to questions and students' records. The system allows instructors to create an exam by entering questions with their corresponding answers into the database. Instructors are provided with three options for the type of exam: True or False, Multiple Choice and Fill in the Blanks. There are three account types based on the intended users. One is the Administrator Account; this can be used to create instructor accounts, and also to delete or suspend other accounts based on activity status. The Instructor Account allows teachers to create student accounts and enroll the same; it can also be used to create, activate, edit and delete exams and to monitor students' performance. The Student Account is for officially enrolled students, who can take exams and view scores even from previous examinations. This software allows instructors to keep track of students' performance across all exams, since the results are stored in a database linked to the online system. While taking an online exam, students can choose the number of exam questions displayed on the screen at a given time. A student can take an exam only on the specified date and time set by the instructor. Ideally, a particular exam should be taken only once; in cases of retakes due to valid reasons and special exam considerations, the instructor is given the option to administer the previously activated exam, edit it or create a new set of questions. One limitation, though: this online system is not to be used to compute the class performance for the final grade, since this requires other components such as seat work, graded recitations, laboratory activities, etc. It only computes and shows the scores from previous exams and the average.
Keywords: online exam maker, online exam checker, online exam maker design, online exam maker development
We deploy a BT node (sensor) that offers passive and active sensing capability to save energy. The BT node works in passive mode for outdoor communication and in active mode for indoor communication. The BT node is supported by a novel automatic energy saving (AES) mathematical model to decide between the two modes. It provides robust and faster communication with less energy consumption. To validate this approach, we use two types of simulation: a test-bed simulation is performed to automate the server through a mobile phone using the AES model, and an NS2 simulation is done to simulate the behavior of the network with the supporting mathematical model. The main objective of this research is to remotely access several types of servers, laptops, desktops and other static and moving objects. This prototype is initially deployed to control MSCS [13] & [14] from a remote place through a mobile device. The prototype can further be implemented to handle several objects simultaneously in universities and other organizations, consuming less energy and fewer resources.
Data stream mining algorithms perform under constraints on space used and time taken, owing to the streaming property. The relaxation in these constraints is inversely proportional to the streaming speed of the data. Since caching and mining streaming data is sensitive, in this paper a scalable, memory-efficient caching and frequent itemset mining model is devised. The proposed model is an incremental approach that builds single-level multi-node trees, called bushes, from each window of the streaming data; henceforth we refer to this proposed algorithm as Tree (bush) based Incremental Frequent Itemset Mining (TIFIM) over data streams.
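The per-window counting step is easy to sketch (the incremental merging into TIFIM's "bushes" is the paper's contribution and is not reproduced here); `max_len` and the toy window are assumptions.

```python
from collections import Counter
from itertools import combinations

def window_frequent_itemsets(window, min_support, max_len=3):
    """Count itemsets inside one window of the stream."""
    counts = Counter()
    for txn in window:
        items = sorted(txn)
        for k in range(1, min(max_len, len(items)) + 1):
            counts.update(combinations(items, k))
    return {iset: c for iset, c in counts.items() if c >= min_support}

window = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b", "c"}]
print(window_frequent_itemsets(window, min_support=2))
```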
The scope of networked embedded systems is increasing rapidly day by day due to the demand for monitoring and controlling appliances in homes as well as industry. Embedded systems with networking provide web access for industrial and research-centre optimization. A data acquisition system combined with web access gives an easy implementation of the system, which uses an ARM processor for control and GPRS technology along with GSM for communication around the world. The real-time operating system plays a crucial role here: the embedded device is booted with µC/OS, a commonly used RTOS for hard real-time systems. The proposed system consists of µC/OS configured for real-time scheduling in industrial applications, which eliminates the need for server maintenance. The system enhances security by remotely monitoring various appliances.
This paper presents the concept of using a hesitation index in optimization problems under uncertainty. Our technique is an extension of the intuitionistic fuzzy optimization technique proposed by Plamen P. Angelov in 1997, which is widely considered a successful intuitionistic fuzzy optimization tool by researchers all over the world. It is well known that the advantages of intuitionistic fuzzy optimization problems are twofold: first, they give the richest apparatus for the formulation of optimization problems, and second, the solution of an intuitionistic fuzzy optimization problem can satisfy the objective(s) with a greater degree of satisfaction than the analogous fuzzy optimization problem and the crisp one. Angelov's approach is an application of the intuitionistic fuzzy (IF) set concept to optimization problems. In his approach, the degree of acceptance is maximized while the degree of rejection is minimized. In our approach, not only is the degree of acceptance maximized and the degree of rejection minimized, but the degree of hesitation is also minimized. For the sake of simplicity alone, the same problem as studied by Angelov is considered. Varied importance (and hence weights) is given to each of the degree of acceptance, the degree of rejection and the degree of hesitation. Tables with these results are formulated and compared.
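In the usual intuitionistic-fuzzy notation (assumed here, since the abstract does not fix symbols), with membership µ, non-membership ν and hesitation π = 1 - µ - ν, the weighted extension can be written as:

```latex
% w_1, w_2, w_3 are the weights on acceptance, rejection and hesitation.
\max_{x}\; w_1\,\mu(x) - w_2\,\nu(x) - w_3\,\pi(x)
\quad\text{s.t.}\quad
\pi(x) = 1 - \mu(x) - \nu(x),\qquad
\mu(x) + \nu(x) \le 1,\qquad
\mu(x),\,\nu(x),\,\pi(x) \ge 0.
```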
In this paper we propose a highly scalable image compression scheme based on the set partitioning in hierarchical trees (SPIHT) algorithm. Our algorithm, called highly scalable SPIHT (HS-SPIHT), supports spatial and SNR scalability and provides a bit stream that can be easily adapted (reordered) to given bandwidth and resolution requirements by a simple transcoder (parser). The HS-SPIHT algorithm adds the spatial scalability feature without sacrificing the SNR embeddedness property found in the original SPIHT bit stream. HS-SPIHT finds applications in progressive web browsing, flexible image storage and retrieval, and image transmission over heterogeneous networks. The core processor, MicroBlaze, is designed in VHDL (VHSIC Hardware Description Language) and implemented using the Xilinx ISE 8.1 design suite; the algorithm is written in SystemC and tested on a Spartan-3 FPGA kit by interfacing a test circuit with the PC over an RS232 cable. The test results are satisfactory. The area taken and the speed of the algorithm are also evaluated.
In this paper, we perform a rigorous analysis of MANET routing protocols selected from different categories over various scenarios, using a large set of performance evaluation metrics. The traffic we model on source-destination pairs is video streams consisting of varying-sized data frames with very low inter-packet times. In this way, we can check the MANET routing protocols over varying data sets and analyze which of the existing MANET routing protocols is best suited for data transmission over MANETs. To analyze the behavior of the various routing protocols during data communication in MANETs, we generate simulation results over various MANET scenarios consisting of varying numbers of nodes and source-destination pairs. The simulation is done using the open-source simulator NS-3. We generate and analyze scenarios where the effects of data communication are evaluated under increasing network mobility and network data traffic. The work is helpful for students working on various issues in MANETs, such as attacks and Quality-of-Service, to identify which protocol they should use as a base routing protocol for their work.
Regression testing is used to ensure that bugs are fixed and that new functionality introduced in a new version of software does not adversely affect the original functionality inherited from the previous version. Regression testing is one of the most demanding activities of software development and maintenance. Unfortunately, there may be too few resources to allow the re-execution of all test cases during regression testing. In this situation the use of test case prioritization is profitable, because the most appropriate test cases are executed first. In this paper we propose an algorithm to prioritize test cases based on the rate of fault detection and the impact of faults. The proposed algorithm recognises the most severe faults at an earlier stage of the testing process. We use the Average Percentage of Faults Detected (APFD) metric to determine the effectiveness of the new test case arrangements.
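The APFD metric itself has a standard closed form; the small sketch below (the test/fault data are made up) computes it for a given ordering.

```python
def apfd(fault_matrix, order):
    """APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where TFi is the
    1-based position, in `order`, of the first test exposing fault i.
    fault_matrix[t][f] is True if test t exposes fault f."""
    n, m = len(order), len(fault_matrix[0])
    first = [next(i + 1 for i, t in enumerate(order) if fault_matrix[t][f])
             for f in range(m)]
    return 1 - sum(first) / (n * m) + 1 / (2 * n)

faults = [[True,  False, False],   # test 0 exposes fault 0
          [False, True,  True],    # test 1 exposes faults 1 and 2
          [False, False, True]]    # test 2 exposes fault 2
print(apfd(faults, order=[1, 0, 2]))   # ~0.72: test 1 first finds 2 faults early
```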
In compiler theory, the Banerjee test is a dependence test. The Banerjee test assumes that all loop indices are independent; in reality, however, this is often not true. The Banerjee test is a conservative test: it will not break a dependence that does not exist. This means that the only thing the test can guarantee is the absence of dependence. This paper proposes an innovative algorithm which allows precise determination of information about dependences and can act in situations where certain loop limits are known.
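The conservative bound at the heart of the test can be sketched as follows: for a dependence equation sum_k a_k * x_k = c with loop bounds L_k <= x_k <= U_k, dependence is provably absent when c falls outside the attainable range of the left-hand side (a sketch of the classic inequality, not of this paper's refined algorithm).

```python
def banerjee_independent(coeffs, const, bounds):
    """True if the equation sum(a_k * x_k) == const has no solution within
    the given bounds, i.e. the dependence is provably absent; False means
    the conservative test is inconclusive."""
    lo = sum(a * (L if a >= 0 else U) for a, (L, U) in zip(coeffs, bounds))
    hi = sum(a * (U if a >= 0 else L) for a, (L, U) in zip(coeffs, bounds))
    return not (lo <= const <= hi)

# a[i] = a[i + 10] with i in 1..5  =>  x_write - x_read = 10, never satisfiable:
print(banerjee_independent([1, -1], 10, [(1, 5), (1, 5)]))  # True: independent
```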
Privacy preservation is a major concern when data mining techniques are applied to large repositories of data containing personal, sensitive and confidential information. Singular Value Decomposition (SVD) is a matrix factorization method which can produce perturbed data by efficiently removing information unnecessary for data mining. In this paper, two hybrid methods are proposed which take advantage of the existing techniques of SVD and geometric data transformations in order to provide better privacy preservation. Reflection data perturbation and scaling data perturbation are familiar geometric data transformation methods which retain the statistical properties of the dataset. In hybrid method one, SVD and scaling data perturbation are used in combination to obtain the distorted dataset. In hybrid method two, SVD and reflection data perturbation are used in combination to obtain the distorted dataset. The experimental results demonstrate that the proposed hybrid methods provide higher utility without breaching privacy.
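Hybrid method one is straightforward to sketch with NumPy; the rank and the scaling factor below are assumptions, not the paper's parameters.

```python
import numpy as np

def svd_scale_perturb(data, rank, scale=1.5):
    """(1) A rank-k SVD removes the least significant structure from the data;
    (2) scaling data perturbation multiplies the result by a constant.
    Statistical relationships survive while raw values are distorted."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    reduced = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
    return reduced * scale

original = np.random.rand(100, 8)     # stand-in for a sensitive dataset
distorted = svd_scale_perturb(original, rank=4)
print(np.corrcoef(original[:, 0], distorted[:, 0])[0, 1])  # structure retained
```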
When designing a visualisation environment for controlling the building services system in Smart Home Care to meet the needs of seniors, emphasis is placed not only on ease of operation and the safety of the elderly, but also on possible cost savings in the operation of the Smart Home Care system. The article describes a design study of the potential savings of electrical energy using software developed for the efficient control of lighting to a constant level with the KNX bus system or with the wireless xComfort system.
In this paper, the travelling salesman problem (TSP) is attempted using a genetic algorithm. In this practical approach the solution is easy, and genetic operators can readily be applied to this type of problem; complexity grows in both time and space with the size of the problem. The solution sought for the travelling salesman problem is the global optimum. There are cities with given distances between them, and the travelling salesman has to visit all of them exactly once; the main objective of the TSP is to find the travelling sequence of cities that minimizes the travelling distance. Initially, parent1 and parent2 are selected by the roulette-wheel concept. A one-point crossover operator is applied to the parents to produce offspring, and the mutation operator is then applied to the offspring to create the child; the number of bits (cities) inverted by the mutation operator depends on the mutation probability (pm). Each generation contains 6 individuals, and the fitness of the individuals in each generation is counted. For the next generation, the two individuals whose fitness is best in the generation are selected as parent1 and parent2. Crossover between two good solutions may not always yield a better or equally good solution, but since the parents are good, the probability of the child being good is high. Each time, the good solutions in the population are identified and multiple copies of them are made.
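A compact, runnable version of the loop described above. One deliberate deviation, flagged here: plain one-point crossover can duplicate cities in a tour, so this sketch uses order crossover (OX) to keep every child a valid permutation; the population size of 6, roulette-wheel selection and swap-style mutation follow the abstract.

```python
import random

CITIES = [(random.random(), random.random()) for _ in range(10)]

def tour_length(tour):
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def roulette(pop):
    weights = [1 / tour_length(t) for t in pop]      # fitness = 1 / distance
    return random.choices(pop, weights=weights, k=2)

def order_crossover(p1, p2):
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]                             # copy a slice of parent 1
    rest = [c for c in p2 if c not in child]         # fill the rest from parent 2
    for k in range(len(child)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

pop = [random.sample(range(10), 10) for _ in range(6)]   # 6 individuals
for generation in range(50):
    p1, p2 = roulette(pop)
    child = order_crossover(p1, p2)
    a, b = random.sample(range(10), 2)
    child[a], child[b] = child[b], child[a]          # swap mutation
    pop = sorted(pop + [child], key=tour_length)[:6] # keep the best 6

print("best tour length:", tour_length(pop[0]))
```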
Mobile technology is becoming more popular around the world. The importance of such technology relates to its capability of allowing the user to perform many different daily tasks. Despite the progress made in the mobile application field, there are still some boundaries and limitations to its use. Some of these difficulties are connected directly to culture; other difficulties are related to experience in using such technology. This research aims to find the main restrictions and obstacles which limit the use of mobile handsets as an Islamic smart Hajj and Umra application. The research aim extends to studying the effect of cultural issues on people's use of the internet on a mobile phone. The research reported here is based on participants from the Hashemite Kingdom of Jordan. This paper addresses several important aspects such as culture, age, mobile human-computer interaction, trust, familiarity and usability. These aspects address the problem raised in this paper and help solve the issues associated with its topic, through an understanding of each of them and by addressing each aspect thoroughly within the steps of composing this work. SUBJECT CLASSIFICATION: Computer Sciences and software engineering. TYPE (METHOD/APPROACH): To achieve the broad aim of this research, it was important to conduct an initial study to clarify the problem. Therefore, the first step of this research was to develop and validate a questionnaire to gain information on the problem area. The next stage is to address the issues generated from the results obtained from the questionnaire. This will highlight the main obstacles and barriers to using smart mobile phones in Hajj and Umra. Participants will be selected randomly from those who have performed Hajj or Umra. The minimum number of participants should not be less than 30; no upper limit is required, however. The lower limit is more important because it allows the use of the central limit theorem and the data-normality assumption at the statistical analysis stage. The experiment will be divided into three parts. In the first part, participants will be asked to fill in a questionnaire designed to understand the main problems that the proposed system will try to solve. The second part of the study will involve designing and developing the system, which will be specially developed to work in a smart phone environment. Finally, participants will be asked to complete a usability questionnaire at the end of the experiment. Participants will be observed while completing each task of the experiment.
The problem of locating road junctions has received much less attention than the extraction of road networks from high-resolution aerial images. The problem of road detection has been in the minds of researchers for the last 30 years, whereas junction detection is a relatively newer problem in which some interesting work has been done in the last decade. The exact localization of junctions is of paramount importance in the field of autonomous driving vehicles. Thus, in this paper, we present a naive but very effective road junction detector. The detector has been tested on a number of rural images and its accuracy is very high.
From the importance of knowledge in speech, we know the importance of the oral exam. In this paper we integrate BOW (Bag of Words), LSA (Latent Semantic Analysis), ASR (automatic speech recognition), the zero crossing rate, and an ontology-based approach to automate the online oral exam, especially in the Arabic language, while taking the authentication problem into consideration. Our proposed method faces many challenges in Arabic because there is no semantic dictionary like WordNet in English or HowNet in Chinese, and Arabic has complicated synonyms. Our proposal can help improve meaningfulness. Finally, the proposed method also automates the feedback for determining learning disability.
We develop and analyze a distributed space-frequency block code orthogonal frequency division multiplexing (SFBC-OFDM) protocol for cooperative communications in 802.11 networks. Space-frequency block codes (SFBC) are spread over OFDM subcarriers instead of OFDM symbols to compensate for the small coherence time. The medium access control (MAC) layer packet retransmission limit is used as an actuator for initializing transmit cooperative diversity. Transmit diversity is provided by the relays in close proximity to the source node. Closed-form expressions are obtained for the packet error rate (PER) and average delay of the proposed scheme in Nakagami-m fading channels. This cooperative scheme achieves lower signal-to-noise ratio (SNR) values for a desired packet error rate and markedly improves the average delay per packet compared to direct transmission in the low-SNR regime. Finally, the results of computer simulations are included to demonstrate the efficacy of the proposed scheme and to verify the accuracy of the analytical expressions.
The aim of this paper is to develop useful rigorous results related to gradient observability and sensors. The concept of gradient strategic sensors is characterized and applied to the wave equation. This emphasizes the spatial structure and location of the sensors in order that regional gradient observability can be achieved. The developed results are illustrated by many examples. Finally, the reconstruction method leads to a numerical algorithm illustrated by simulations.
Maintainability is an important quality attribute and a difficult concept, as it involves a number of measurements. Quality estimation here means estimating the maintainability of software. Maintainability is a set of attributes that bear on the effort needed to make specified modifications. The main goal of this paper is to propose the use of a few machine learning algorithms for predicting software maintainability, and to evaluate them. The proposed models are Gaussian process regression networks (GPRN), probably approximately correct (PAC) learning, and the genetic algorithm (GA). This paper predicts maintenance effort. The QUES (Quality Evaluation System) dataset, containing 71 classes, is used in this study. To measure maintainability, the number of "CHANGE"s is observed over a period of a few years: we define CHANGE as the number of lines of code which were added, deleted or modified during a few-year maintenance period. These machine learning algorithms were then compared with models such as the GRNN (general regression neural network), RT (regression tree), MARS (multiple adaptive regression splines), SVM (support vector machine) and MLR (multiple linear regression) models. Based on the experiments, it was found that GPRN can predict maintainability more accurately and precisely than the prevailing models. We also include object-oriented software metrics to measure software maintainability. The use of machine learning algorithms to establish the relationship between metrics and maintainability is a much better approach, as it is based on quantity as well as quality.
Image subtraction has been frequently used for automated visual inspection of printed circuit board (PCB) defects. Even though image subtraction is able to detect all defects occurring on a PCB, some unwanted noise may be detected as well. Hence, before image subtraction can be applied to real PCB images, an image registration operation should be performed to align the defective PCB image against a template PCB image. This study shows how image registration is combined with a thresholding algorithm to eliminate unwanted noise. The results show that all defects occurring in real PCB images can be correctly detected without interference from unwanted noise.
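A minimal sketch of such a registration-subtraction-thresholding pipeline using OpenCV (an assumed pipeline with hypothetical file names, not the authors' exact method):

```python
import cv2
import numpy as np

template = cv2.imread("template_pcb.png", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("defective_pcb.png", cv2.IMREAD_GRAYSCALE)

# Estimate a translation that aligns the test image with the template
# via ECC (enhanced correlation coefficient) maximization.
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
_, warp = cv2.findTransformECC(template, test, warp,
                               cv2.MOTION_TRANSLATION, criteria)
aligned = cv2.warpAffine(test, warp, (template.shape[1], template.shape[0]),
                         flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

# Subtract, threshold, and remove isolated noise with a small opening.
diff = cv2.absdiff(template, aligned)
_, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```

Without the registration step, even a one-pixel misalignment leaves edge-shaped residue in the difference image, which is exactly the unwanted noise the thresholding and opening stages then have to remove.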
Face recognition is one of the important applications of image processing, and it has gained significant attention in a wide range of law enforcement areas in which security is of prime concern. Although existing automated machine recognition systems have a certain level of maturity, their accomplishments are limited by real-time challenges; face recognition systems are highly sensitive to appearance variations due to lighting, expression, and aging. The major metric in modeling the performance of a face recognition system is its recognition accuracy. This paper proposes a novel method which improves recognition accuracy and also prevents face datasets from being tampered with through image splicing techniques. The proposed method uses a non-statistical procedure which avoids a training step for face samples, thereby avoiding the generalizability problem caused by statistical learning procedures. The method performs well on images with partial occlusion and lighting variations because the face is divided into several local patches. The performance improvement is considerable in terms of recognition rate and storage space, achieved by storing training images in the compressed domain and selecting significant features from a superset of feature vectors for the actual recognition.
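The patch-voting idea can be sketched as follows (illustrative only, with a plain nearest-neighbour distance; the paper's feature selection and compressed-domain storage are not reproduced):

```python
import numpy as np

def split_patches(img, rows=4, cols=4):
    """Divide an aligned grayscale face into a grid of flattened patches."""
    h, w = img.shape
    ph, pw = h // rows, w // cols
    return [img[r*ph:(r+1)*ph, c*pw:(c+1)*pw].astype(np.float64).ravel()
            for r in range(rows) for c in range(cols)]

def identify(probe, gallery):
    """gallery: dict id -> aligned face image of the same size. Each
    probe patch votes for the identity whose corresponding patch is
    nearest, so an occluded or badly lit patch cannot dominate."""
    probe_patches = split_patches(probe)
    gal_patches = {pid: split_patches(img) for pid, img in gallery.items()}
    votes = {pid: 0 for pid in gallery}
    for i, p in enumerate(probe_patches):
        best = min(gallery, key=lambda pid: np.linalg.norm(p - gal_patches[pid][i]))
        votes[best] += 1
    return max(votes, key=votes.get)
```

Note there is no training step: identification is a direct patch-wise comparison against stored gallery images, which is what the abstract means by a non-statistical procedure.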
The graphical user interface (GUI) is considered an essential part of any web application development. Aspect-Oriented Component Engineering (AOCE) is a new approach for developing higher-quality reusable and adaptable software or web application components. AOCE uses the idea of provided and required services. Adaptable user interfaces for AOCE-based development have not yet been considered for web applications. A simple and easy user interface helps users operate an application or web interface effectively. The purpose of this study is to discuss popular user interfaces and give suggestions for adaptable GUI design. Examples of adaptation at different levels include architecture, presentation, extension, and composition. Furthermore, the common systemic aspects of AOCE are discussed with respect to web interface adaptivity.
Mobile WiMAX is the IEEE 802.16e standard established for mobile broadband wireless access (BWA). Mobility is an important issue in a WiMAX system when the mobile station (MS) moves and is handed over between base stations (BSs): it causes unnecessary neighboring-BS scanning and association, handover delay, and MAC overhead, which may affect real-time applications. MS movement direction prediction (MMDP) based scanning is used to reduce the number of scans required in the mobile WiMAX handover process. In this paper, in addition to reducing the number of scans, we propose to reduce the scanning delay for the two best evaluated target BSs by using a concurrent scanning process. Network issues such as idle sectors in WiMAX 2 (IEEE 802.16m), network congestion, and fast changes in RSS are important parameters which can affect QoS. The proposed model provides high-quality handover support in mobile WiMAX and addresses the network problems stated above; using the concurrent scanning process, the authors are able to reduce the scanning delay by 40%. As part of this paper, network layer parameters, namely routing protocols, are analyzed for handover support. This analysis identifies efficient routing protocols for real-time applications and for the best-effort service: it reveals that the Fisheye protocol is better for smaller networks and LANMAR for larger networks in real-time applications, while the IERP protocol is good for best-effort services.
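The benefit of concurrent scanning can be illustrated with a toy sketch (hypothetical timings and BS names, not the 802.16e MAC itself): scanning the two best candidates in parallel means the handover decision waits for the slower scan rather than the sum of both:

```python
import concurrent.futures
import random
import time

def scan_bs(bs_id):
    """Stand-in for one scan interval returning a measured RSS."""
    time.sleep(random.uniform(0.02, 0.05))   # pretend scan duration
    return bs_id, random.uniform(-90, -60)   # pretend RSS in dBm

candidates = ["BS_A", "BS_B"]                # two best targets from MMDP
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(scan_bs, candidates))

target = max(results, key=lambda r: r[1])
print("Handover target:", target[0], "RSS:", round(target[1], 1), "dBm")
```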
Machine translation has been used for nearly 50 years, and although accuracy steadily improves, results often do not attain the standard set by human interpretation. Some people can understand poor translations better than others, but little is known about why this is so. Here, we show that language fluency was not a major factor in a human reader's ability to comprehend a passage of text translated by computer, but prior topic knowledge could be.
Modern telecommunication systems require very high transmission rates; in this context, the problem of channel identification is a major challenge. The use of blind techniques is of great interest for achieving the best compromise between a suitable bit rate and the quality of the retrieved information. In this paper, we study algorithms for blind channel identification and propose a hybrid method that performs a trade-off between two existing methods in order to improve the channel estimate.
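For concreteness, one classical blind identifier that such hybrids often build on is the two-channel cross-relation method, sketched below (our choice of example; the two methods the paper actually combines are not named in the abstract). It exploits x1*h2 = x2*h1 for a common input, so the stacked convolution matrix has the true channels in its null space:

```python
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(x, L):
    """Toeplitz matrix T with T @ h = valid part of np.convolve(x, h),
    for a length-L channel h."""
    return toeplitz(x[L-1:], x[L-1::-1])

def cross_relation(x1, x2, L):
    """Blind estimate of both channels (up to one common scale factor)
    from the two received signals alone, via the SVD null vector."""
    A = np.hstack([conv_matrix(x2, L), -conv_matrix(x1, L)])
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]                 # null vector = stacked [h1, h2]
    return h[:L], h[L:]

# Example: one unknown input observed through two unknown channels.
rng = np.random.default_rng(0)
s = rng.standard_normal(200)
h1, h2 = rng.standard_normal(4), rng.standard_normal(4)
x1, x2 = np.convolve(s, h1), np.convolve(s, h2)
e1, e2 = cross_relation(x1, x2, L=4)   # estimates, up to a common scale
```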
The motivation for the present study derives from the fact that time management is an integral part of good engineering practice. The present study investigated the quantification of the required computation time using two nonlinear, harmonically excited oscillators (Pendulum and Duffing) as case studies. Simulations on a personal computer were performed for the Runge-Kutta schemes RK2, RK3, RK4, RK5, and RK5M and one blend (RKB) over thirty-five thousand and ten excitation periods comprising the unsteady and steady solutions. The need to validate the developed FORTRAN90 codes by comparing Poincaré results with their counterparts from the literature informed the choice of simulation parameters. The simulation time was monitored at three lengths of excitation period (15000, 25000, and 35000) using the current-time subroutine call command. The validation Poincaré results obtained for all the schemes, including RKB, compare well with the counterparts available in the literature for both Pendulum and Duffing. The actual computation time increases with increasing order of the scheme but decreases for the blended scheme. The difference in computation time between RK5 and RK5M is negligible for all studied cases. The actual computation time for Duffing (5-33 seconds) remains consistently higher than for the corresponding Pendulum (3-23 seconds), with a difference of 2-10 seconds. Interestingly, the quantitative difference between the corresponding normalised computation times across systems and schemes is insignificant: it is insensitive to system and scheme, forming a simple average ratio of 1.0 : 1.5 : 2.0 : 3.1 : 3.1 : 2.4 for RK2, RK3, RK4, RK5, RK5M, and RKB respectively. It is concluded that the end justifies the means, provided that computational accuracy is assured by using a higher-order scheme (with its higher computation-time ratio).
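A minimal timing sketch in the spirit of the study, in Python rather than the authors' FORTRAN90, with illustrative (not the paper's) Duffing parameters and a fixed-step RK4 integrator:

```python
import time
import numpy as np

def duffing(t, y, delta=0.25, alpha=-1.0, beta=1.0, f=0.3, w=1.0):
    """Damped, harmonically excited Duffing oscillator (illustrative
    parameter values): x'' + delta x' + alpha x + beta x^3 = f cos(w t)."""
    x, v = y
    return np.array([v, -delta*v - alpha*x - beta*x**3 + f*np.cos(w*t)])

def rk4(f, y0, t0, dt, steps):
    """Classical fixed-step fourth-order Runge-Kutta integration."""
    y, t = np.array(y0, dtype=float), t0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + dt/2, y + dt/2*k1)
        k3 = f(t + dt/2, y + dt/2*k2)
        k4 = f(t + dt, y + dt*k3)
        y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        t += dt
    return y

# 15000 excitation periods, the shortest run length in the study; pure
# Python is of course far slower than compiled FORTRAN90.
periods, steps_per_period = 15000, 100
dt = 2*np.pi / steps_per_period          # drive period T = 2*pi/w with w = 1
start = time.perf_counter()
rk4(duffing, [1.0, 0.0], 0.0, dt, periods*steps_per_period)
print("Elapsed:", time.perf_counter() - start, "s")
```

Timing the same loop with RK2, RK3, or RK5 coefficients swapped in is the experiment that yields the computation-time ratios reported above.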
The emergence of multi-drug resistant (MDR) strains of Mycobacterium tuberculosis is the main reason why tuberculosis (TB) continues to be a major health problem worldwide. It is urgent to discover novel anti-mycobacterial agents based on new drug targets for the treatment of TB, especially MDR-TB. The tryptophan biosynthetic pathway, which is essential for the survival of M. tuberculosis and absent in mammals, provides potential anti-TB drug targets. One of the promising drug targets in this pathway is anthranilate synthase component I (TrpE), whose role is to catalyze the conversion of chorismate to anthranilate using ammonia as the amino source. Anthranilate synthase is an interesting target enzyme for antimicrobial activity because microorganisms rely on it for the synthesis of the essential amino acid tryptophan. In the present study, three compounds from Cannabis sativa, cannabigerolic acid, cannabinolic acid, and adhumulone, were used for in silico docking studies; in vitro inhibitory studies of these compounds against microorganisms have been reported earlier. Our approach is to identify compounds inhibiting the AS1 of MTB by in silico docking and also to find compounds with similar pharmacophore characteristics in the ZINC database, so that those compounds can be procured or synthesized in the laboratory and used for AS1 inhibitor studies. This study shows that AS can be used as a target enzyme to investigate the mode of action of our compounds in MTB.
This paper focuses on the application of both the Balanced Scorecard (BSC) conceptual framework and Multi-criteria Decision Analysis (MCDA), a tool for scenario planning and strategic decision thinking, to hazard risk management within the Limpopo River Basin. We discuss best practices in four main domain areas, namely the political (as a pool for worldwide country ranking), economic, social development, and technology domains, and how they can contribute to building a viable scenario for the management of the basin.
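A toy weighted-sum MCDA scoring over the four domains (the weights, scenario names, and scores are invented purely for illustration of the technique):

```python
# Domain weights summing to 1.0 (invented for illustration).
weights = {"political": 0.2, "economic": 0.3, "social": 0.3, "technology": 0.2}

# Hypothetical hazard-management scenarios scored 0-10 per domain.
scenarios = {
    "early-warning upgrade": {"political": 6, "economic": 7, "social": 8, "technology": 9},
    "levee reinforcement":   {"political": 7, "economic": 5, "social": 7, "technology": 6},
    "resettlement program":  {"political": 4, "economic": 6, "social": 9, "technology": 5},
}

# Weighted-sum aggregation: higher total = more attractive scenario.
for name, scores in scenarios.items():
    total = sum(weights[d] * scores[d] for d in weights)
    print(f"{name}: {total:.2f}")
```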
