In the literature, automated Bangla article classification has been studied, and several supervised learning models have been proposed that rely on a large textual data corpus. Although comprehensive textual datasets are available for many other languages, only a few small datasets have been curated for Bangla. As a result, few works address the Bangla document classification problem, and due to the lack of sufficient training data, these approaches could not learn sophisticated supervised models. In this work, we curated a large dataset of Bangla articles from different news portals, containing 376,226 articles. This large, diverse dataset allows us to train several supervised learning models using a set of sophisticated textual features, such as word embeddings and TF-IDF. Our learning models show promising performance on the curated dataset compared to state-of-the-art works in Bangla article classification. Furthermore, we deployed our proposed Bangla content classifier as a web application at bard2018.pythonanywhere.com; a video demo of this application is available at bit.ly/BARD_VIDEO_DEMO. Additionally, we open-sourced the BARD dataset (bit.ly/BARD_DATASET) and the source code of this work (bit.ly/BARD_SC).
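A minimal sketch of the kind of TF-IDF-based supervised pipeline described above, using scikit-learn; the column names, file path, and the choice of a linear SVM are illustrative assumptions, not the exact BARD configuration.

```python
# Illustrative sketch only: TF-IDF features + a linear classifier for
# Bangla article categorization. Column names and the CSV path are
# hypothetical; the original BARD pipeline may differ.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("bard_articles.csv")   # assumed columns: "text", "category"
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["category"], test_size=0.2, random_state=42, stratify=df["category"]
)

model = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=50000, ngram_range=(1, 2))),
    ("clf", LinearSVC()),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```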
With the advancement of information technologies and applications, a copious amount of data is generated, which attracts both the research community, to extract knowledge from this information, and industry, to develop knowledge-based systems. Data visualization, pattern mining from datasets, and analyzing data drift across features are three widely used applications in machine learning and data science. A generic web-based tool integrating such features provides substantial support for preprocessing a dataset and thus extracting accurate information. In this work, we propose such a tool, named VIM, a comprehensive web-based tool for generic data visualization, data preprocessing, and knowledge mining with data-drift analysis. Given a dataset, it can visualize the distribution of the data with convenient statistical diagrams for the selected features. Moreover, users can employ VIM to generate association rules over multiple selected features. We have developed VIM using the Python Django framework and the GraphLab library, and we have deployed it publicly; it can be accessed at http://210.4.73.237:9999/
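As a rough illustration of the rule-mining step, the following self-contained sketch computes support and confidence for pairwise association rules over a boolean feature table; it does not use GraphLab, and the thresholds and columns are assumptions.

```python
# Illustrative sketch: pairwise association rules (support/confidence)
# over a one-hot boolean DataFrame. Thresholds and data are assumptions;
# VIM itself is built on the GraphLab library.
from itertools import permutations
import pandas as pd

data = pd.DataFrame({
    "milk":   [1, 1, 0, 1, 1],
    "bread":  [1, 1, 1, 0, 1],
    "butter": [0, 1, 0, 0, 1],
}).astype(bool)

min_support, min_confidence = 0.4, 0.7
n = len(data)

for antecedent, consequent in permutations(data.columns, 2):
    support_both = (data[antecedent] & data[consequent]).sum() / n
    support_ante = data[antecedent].sum() / n
    if support_both >= min_support:
        confidence = support_both / support_ante
        if confidence >= min_confidence:
            print(f"{antecedent} -> {consequent}: "
                  f"support={support_both:.2f}, confidence={confidence:.2f}")
```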
Computer vision techniques have been frequently applied to pedestrian and cyclist detection for the purpose of providing sensing capabilities to autonomous vehicles and delivery robots, among other use cases. Most current computer vision approaches for pedestrian and cyclist detection rely on RGB data alone. However, RGB-only systems struggle in the poor lighting and weather conditions, such as at night or during fog or precipitation, that are often present in pedestrian detection contexts. Thermal imaging presents a solution to these challenges, as its quality is independent of time of day and lighting conditions. The use of thermal input, such as imagery in the Long Wave Infrared (LWIR) range, is thus beneficial in computer vision models, as it allows the detection of pedestrians and cyclists in variable illumination conditions that would pose challenges for RGB-only detection systems. In this paper, we present a pedestrian and cyclist detection method via thermal imaging using a deep neural network...
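A hedged sketch of how a thermal frame might be fed to an off-the-shelf detector: the single LWIR channel is replicated to three channels and passed through a pretrained torchvision Faster R-CNN. This stands in for, and is not, the network proposed in the paper.

```python
# Illustrative sketch: running a pretrained RGB detector on a thermal frame
# by replicating the single LWIR channel. The paper's own architecture and
# training on thermal data are not reproduced here.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

thermal = torch.rand(1, 480, 640)      # placeholder LWIR frame, values in [0, 1]
rgb_like = thermal.repeat(3, 1, 1)     # replicate the channel to match RGB input

with torch.no_grad():
    detections = model([rgb_like])[0]

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if label.item() == 1 and score.item() > 0.5:   # COCO class 1 = person
        print("pedestrian candidate:", box.tolist(), round(score.item(), 2))
```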
With the growing software developer community, question answering (QA) sites such as StackOverflow have been gaining popularity. Hence, in recent years, millions of questions and answers have been posted on StackOverflow. As a result, it takes an enormous amount of effort to find the most suitable answer to a question. Luckily, StackOverflow allows community members to label an answer as the accepted answer. However, for most questions, no answer is marked as accepted. Therefore, there is a need for a recommender system that can accurately suggest the most suitable answers to questions. In contrast to existing systems, in this work we have utilized the textual features of the answers' comments, together with other answer metadata, to build a recommender system for predicting the accepted answer. In our experiments, the system achieved 89.7% accuracy in predicting the accepted answer when utilizing the textual metadata as a feature. W...
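A minimal sketch of combining comment text with answer metadata in a single classifier, using scikit-learn; the column names, sample data, and the random-forest choice are assumptions rather than the paper's exact setup.

```python
# Illustrative sketch: TF-IDF over answer-comment text combined with
# numeric answer metadata to predict the accepted answer. Column names,
# sample rows, and the random-forest choice are assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

answers = pd.DataFrame({
    "comment_text": ["works perfectly, thanks", "this crashes on python 3", "nice explanation"],
    "score": [12, 1, 5],
    "answer_age_days": [3, 40, 7],
    "is_accepted": [1, 0, 0],
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "comment_text"),
    ("meta", "passthrough", ["score", "answer_age_days"]),
])

model = Pipeline([("features", features), ("clf", RandomForestClassifier(n_estimators=200))])
model.fit(answers.drop(columns="is_accepted"), answers["is_accepted"])
print(model.predict(answers.drop(columns="is_accepted")))
```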
Recognizing human activities is one of the crucial capabilities that a robot needs to have to be useful around people. Although modern robots are equipped with various types of sensors, human activity recognition (HAR) still remains a challenging problem, particularly in the presence of noisy sensor data. In this work, we introduce a multimodal graphical attention-based HAR approach, called Multi-GAT, which hierarchically learns complementary multimodal features. We develop a multimodal mixture-of-experts model to disentangle and extract salient modality-specific features that enable feature interactions. Additionally, we introduce a novel message-passing based graphical attention approach to capture cross-modal relation for extracting complementary multimodal features. The experimental results on two multimodal human activity datasets suggest that Multi-GAT outperformed state-of-the-art HAR algorithms across all datasets and metrics tested. Finally, the experimental results with noisy sensor data indicate that Multi-GAT consistently outperforms all the evaluated baselines. The robust performance suggests that Multi-GAT can enable seamless human-robot collaboration in noisy human environments.
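The following toy PyTorch sketch illustrates the general idea of attention-weighted fusion of modality-specific features; it is a deliberate simplification and not the Multi-GAT mixture-of-experts or message-passing graphical attention architecture.

```python
# Toy sketch: attention-weighted fusion of two modality embeddings.
# This illustrates the general idea only; it is not the Multi-GAT model.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dims, hidden=64, num_classes=10):
        super().__init__()
        self.encoders = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        self.attn = nn.Linear(hidden, 1)          # scores each modality embedding
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, inputs):                    # inputs: list of per-modality tensors
        feats = torch.stack(
            [torch.relu(enc(x)) for enc, x in zip(self.encoders, inputs)], dim=1)
        weights = torch.softmax(self.attn(feats), dim=1)   # (batch, n_modalities, 1)
        fused = (weights * feats).sum(dim=1)                # attention-weighted sum
        return self.head(fused)

model = AttentionFusion(dims=[30, 12])   # e.g. skeleton vs. inertial feature sizes (assumed)
logits = model([torch.randn(8, 30), torch.randn(8, 12)])
print(logits.shape)                      # torch.Size([8, 10])
```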
With the advancement of technology, devices that are considered non-traditional in terms of internet capabilities are now being equipped with embedded microprocessors that let them communicate; such devices are known as IoT devices. This technology has given household devices the ability to communicate over the internet, and a network comprising such devices creates a home IoT network. These IoT devices are resource-constrained and lack high-level security protocols, so security becomes a major issue for such networks. One way to secure these networks is through reliable authentication protocols and data transfer mechanisms. Since household devices are controlled remotely by their users, they are accessed over the internet. Therefore, there should also be a method to make the communication between IoT devices and users over the internet more secure. This paper proposes a two-phase authentication protocol for authentication purposes and a VPN-based secure channel creation ...
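As a rough illustration of two-phase authentication in general (a credential check followed by an HMAC challenge-response), the sketch below is a generic example and not the protocol specified in the paper; the VPN tunnel step is outside its scope, and the keys and passwords are placeholders.

```python
# Illustrative sketch of a generic two-phase handshake: phase 1 verifies a
# stored credential, phase 2 is an HMAC challenge-response with a device key.
# This is NOT the paper's protocol, only a generic example with placeholder keys.
import hashlib
import hmac
import secrets

DEVICE_KEY = secrets.token_bytes(32)                       # pre-shared device key (assumed)
USER_PW_HASH = hashlib.sha256(b"correct horse battery").hexdigest()

def phase_one(password: str) -> bool:
    """Phase 1: user credential check."""
    return hmac.compare_digest(hashlib.sha256(password.encode()).hexdigest(), USER_PW_HASH)

def phase_two(device_key: bytes) -> bool:
    """Phase 2: challenge-response proving possession of the device key."""
    challenge = secrets.token_bytes(16)
    response = hmac.new(device_key, challenge, hashlib.sha256).digest()   # device side
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()   # server side
    return hmac.compare_digest(response, expected)

if phase_one("correct horse battery") and phase_two(DEVICE_KEY):
    print("authenticated; a VPN tunnel would be established next")
```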
Glycation is a chemical reaction in which a sugar molecule bonds to a protein without the help of enzymes. It is often a cause of many diseases, and therefore knowledge about glycation is very important. In this paper, we present iProtGly-SS, a protein lysine glycation site identification method based on features extracted from sequence and secondary structural information. In our experiments, we found the best combination of feature groups to be amino acid composition, secondary structure motifs, and polarity. We trained our model with a support vector machine classifier and chose an optimal set of features using a group-based forward feature selection technique. On standard benchmark datasets, our method significantly outperforms existing methods for glycation prediction. A web server for iProtGly-SS has been implemented and is publicly available at: http://brl.uiu.ac.bd/iprotgly-ss/.
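A small sketch of group-based forward feature selection with an SVM, in the spirit of the method described; the synthetic data, group names, and cross-validation settings are illustrative assumptions, not the iProtGly-SS feature set.

```python
# Illustrative sketch: greedy forward selection over feature *groups*
# with an SVM scorer. Data, group column indices, and settings are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))
y = (X[:, 0] + X[:, 3] > 0).astype(int)        # synthetic labels

groups = {"AAC": [0, 1, 2], "SS_motifs": [3, 4, 5], "Polarity": [6, 7, 8]}
selected, best_score = [], 0.0

while True:
    candidates = []
    for name, cols in groups.items():
        if name in selected:
            continue
        cols_now = [c for g in selected for c in groups[g]] + cols
        score = cross_val_score(SVC(kernel="rbf"), X[:, cols_now], y, cv=5).mean()
        candidates.append((score, name))
    if not candidates:
        break
    score, name = max(candidates)
    if score <= best_score:
        break                                   # no group improves the CV score
    selected.append(name)
    best_score = score

print("selected groups:", selected, "cv accuracy:", round(best_score, 3))
```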
Newspapers play a crucial role in keeping people up to date with world issues and cutting-edge technologies. However, collecting news is not an easy task. Currently, news publishers collect news from their correspondents through social networks, email, phone calls, fax, etc., and sometimes they buy news from agencies. However, existing news sharing networks may not protect data integrity, and any third party may obstruct the regular flow of news sharing. Moreover, existing news sharing schemes are highly vulnerable to identity disclosure. Therefore, in the era of globalization, a universal platform is needed where anyone can share and trade news from anywhere in the world securely, without third-party interference, and without disclosing an individual's identity. Recently, blockchain has gained popularity because of its security mechanisms for data, identity, and more. Blockchain enables a distributed way of managing transactions in which each participant of the network holds the same copy of the transactions. Therefore, leveraging the pseudonymity, fault tolerance, immutability, and distributed structure of blockchain, this paper presents a scheme (termed NEWSTRADCOIN) in which news can not only be shared securely but also be sold, allowing anyone to earn money. The proposed NEWSTRADCOIN provides a universal platform where publishers can directly obtain news from news gatherers in a secure way, maintaining data integrity, avoiding third-party interference, and keeping the identities of news gatherers and publishers undisclosed.
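To illustrate the basic hash-chained ledger idea the scheme builds on (each block committing to the previous one so that shared records become tamper-evident), here is a toy Python sketch; it is not the NEWSTRADCOIN design itself, and the payload fields are invented.

```python
# Toy hash-chained ledger: each block commits to the previous block's hash,
# so altering an earlier news record invalidates every later block.
# Generic illustration only; not the NEWSTRADCOIN scheme, payload fields invented.
import hashlib
import json
import time

def make_block(prev_hash: str, payload: dict) -> dict:
    block = {"timestamp": time.time(), "prev_hash": prev_hash, "payload": payload}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain) -> bool:
    for prev, curr in zip(chain, chain[1:]):
        body = {k: v for k, v in curr.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if curr["prev_hash"] != prev["hash"] or curr["hash"] != recomputed:
            return False
    return True

genesis = make_block("0" * 64, {"note": "genesis"})
chain = [genesis, make_block(genesis["hash"], {"seller": "gatherer_42", "headline": "..."})]
print("chain valid:", verify(chain))
```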
Mobile Cloud Computing (MCC) improves the performance of a mobile application by executing it on a resourceful cloud server, which can minimize execution time compared to a resource-constrained mobile device. Virtual Machine (VM) migration in MCC brings cloud resources closer to a user so as to further minimize the response time of an offloaded application. Such resource migration is very effective for interactive and real-time applications. However, the key challenge is to find an optimal cloud server for migration that offers the maximum reduction in computation time. In this paper, we propose a Genetic Algorithm (GA) based VM migration model, namely GAVMM, for heterogeneous MCC systems. In GAVMM, we take user mobility and the load of the cloud servers into consideration to optimize the effectiveness of VM migration. The goal of GAVMM is to select the optimal cloud server for a mobile VM and to minimize the total number of VM migrations, resulting in reduced task execution time. Additionally, we present a thorough numerical evaluation to investigate the effectiveness of our proposed model compared to state-of-the-art VM migration policies.
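A compact sketch of a genetic algorithm assigning VMs to candidate cloud servers, with a fitness that penalizes server load and user-to-server latency; the encoding, fitness weights, and data are assumptions, not the GAVMM formulation.

```python
# Illustrative GA sketch: choose migration targets for several VMs so that
# server load and user-to-server latency are jointly minimized.
# Encoding, fitness, and data are assumptions, not GAVMM itself.
import random

random.seed(1)
NUM_VMS, NUM_SERVERS = 5, 4
server_load = [0.2, 0.7, 0.4, 0.1]                        # assumed normalized loads
latency = [[random.random() for _ in range(NUM_SERVERS)] for _ in range(NUM_VMS)]

def fitness(assignment):                                   # lower is better
    return sum(latency[vm][srv] + server_load[srv] for vm, srv in enumerate(assignment))

def crossover(a, b):
    point = random.randrange(1, NUM_VMS)
    return a[:point] + b[point:]

def mutate(a, rate=0.2):
    return [random.randrange(NUM_SERVERS) if random.random() < rate else g for g in a]

population = [[random.randrange(NUM_SERVERS) for _ in range(NUM_VMS)] for _ in range(30)]
for _ in range(50):                                        # generations
    population.sort(key=fitness)
    parents = population[:10]                              # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = min(population, key=fitness)
print("best VM->server assignment:", best, "cost:", round(fitness(best), 3))
```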
Maximum target coverage with the minimum number of sensor nodes, known as the MCMS problem, is an important problem in Directional Sensor Networks (DSNs). Existing solutions allow individual sensor nodes to determine their sensing direction for maximum target coverage, which causes sensing coverage redundancy as well as high energy consumption. Gathering nodes into clusters might provide a better solution to this problem. In this paper, we have designed distributed clustering and target coverage algorithms to address the problem in an energy-efficient way. Our extensive simulation study shows that our system outperforms a number of state-of-the-art approaches.
Maximum target coverage with the minimum number of sensor nodes, known as the MCMS problem, is an important problem in directional sensor networks (DSNs). For guaranteed coverage and event reporting, the underlying mechanism must ensure that all targets are covered by the sensors and that the resulting network is connected. Existing solutions allow individual sensor nodes to determine their sensing direction for maximum target coverage, which produces sensing coverage redundancy and considerable overhead. Gathering nodes into clusters might provide a better solution to this problem. In this paper, we have designed distributed clustering and target coverage algorithms to address the problem in an energy-efficient way. To the best of our knowledge, this is the first work that exploits cluster heads to determine the active sensing nodes and their directions for solving target coverage problems in DSNs. Our extensive simulation study shows that our system outperforms a number of state-of-the-art approaches.
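A toy greedy sketch of the underlying set-cover-style decision (which sensor/direction pairs to activate so that all targets are covered by few sensors); the coverage table is invented, and the paper's distributed, cluster-head-driven algorithm is not reproduced here.

```python
# Toy greedy sketch of direction selection for target coverage: repeatedly pick
# the (sensor, direction) pair covering the most uncovered targets, allowing at
# most one active direction per sensor. Coverage data is invented; this is not
# the distributed, cluster-head-based algorithm from the paper.
coverage = {                       # (sensor, direction) -> targets it can see (assumed)
    ("s1", "N"): {"t1", "t2"},
    ("s1", "E"): {"t3"},
    ("s2", "N"): {"t2", "t4"},
    ("s2", "W"): {"t4", "t5"},
    ("s3", "S"): {"t3", "t5"},
}
targets = set().union(*coverage.values())

uncovered, active = set(targets), []
while uncovered:
    used_sensors = {s for s, _ in active}
    candidates = {k: v for k, v in coverage.items() if k[0] not in used_sensors}
    if not candidates:
        break
    best = max(candidates, key=lambda k: len(candidates[k] & uncovered))
    if not candidates[best] & uncovered:
        break                       # remaining targets cannot be covered
    active.append(best)
    uncovered -= coverage[best]

print("activated sensor directions:", active)
print("uncovered targets:", uncovered)
```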