The biggest difference between video-based and image-based action recognition is that the former has an additional time dimension. Most deep learning methods for action recognition either (1) use 3D convolution to model temporal features, or (2) introduce an auxiliary temporal feature such as optical flow. However, 3D convolutional networks usually consume large computational resources, while optical flow extraction requires a tedious extra process and additional storage, and typically models only short-range temporal features. To better construct temporal features, this paper proposes a multi-scale attention spatial-temporal features network based on SSD. The whole video sequence is divided into segments for sparse sampling over its full length, and a self-attention mechanism captures the relation between each frame and the sequence of frames sampled across the entire video, making the network attend to the representative frames in the sequence. Moreover, the attention mechanism assigns different weights to the inter-frame relations representing different time scales, so as to reason about the contextual relations of actions in the time dimension. The proposed method achieves competitive performance on two commonly used datasets: UCF101 and HMDB51.
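As a sketch of the frame-level self-attention the abstract describes, the following toy example computes scaled dot-product attention over a sequence of sparsely sampled frame feature vectors. The feature dimension and the use of identity projections for queries, keys, and values are simplifying assumptions, not the paper's architecture:

```python
import numpy as np

def self_attention(frames, d_k):
    """Toy scaled dot-product self-attention over sampled frames.

    frames: (T, d) array, one feature vector per sampled frame.
    For illustration, queries, keys, and values are the frames themselves
    (identity projections), a simplification of the real network.
    """
    scores = frames @ frames.T / np.sqrt(d_k)        # (T, T) frame-to-sequence relations
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax: attend over the whole sequence
    return weights @ frames                          # each frame aggregates representative frames
```

In the full pipeline, `frames` would be backbone features for frames sparsely sampled across the entire video rather than raw vectors.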
2018 8th International Conference on Cloud Computing, Data Science & Engineering (Confluence)
The growing popularity of social networking has brought about a rapid increase in the amount of data produced by users. Social networking websites such as Twitter and Facebook provide a platform for millions of users to express their views about different services and products. Twitter is a rich source of data, and sentiment analysis can be used to refine this data into information. The proposed system performs sentiment analysis on Twitter data. The tweet data forms a dataset that cannot be handled by traditional computing tools and techniques; Hadoop is a platform capable of handling such large datasets. Hence, the proposed system uses the Hadoop ecosystem to analyze user sentiment. Classification is performed using a trained model from Stanford CoreNLP. With the help of Big Data techniques and Hadoop, the proposed system analyzes input text and classifies it according to the provided labels. Existing systems using lexical techniques or machine learning algorithms have lower performance metrics. The proposed system overcomes these problems by combining Hadoop, for handling huge volumes of data, with CoreNLP, to augment the language-processing capabilities of the system, and shows better results than existing systems.
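The Hadoop pipeline the abstract outlines can be sketched in miniature, in plain Python, as a map step that labels each tweet and a reduce step that tallies the labels. The `toy_classify` stub stands in for the trained Stanford CoreNLP model, which is not reproduced here:

```python
from collections import Counter

def mapper(tweet, classify):
    # emit a (sentiment_label, 1) pair for each tweet, as a Hadoop mapper would
    return (classify(tweet), 1)

def reducer(pairs):
    # sum counts per label, as a Hadoop reducer would
    counts = Counter()
    for label, n in pairs:
        counts[label] += n
    return dict(counts)

# Stub classifier standing in for the Stanford CoreNLP model (assumption).
def toy_classify(text):
    return "positive" if "good" in text.lower() else "negative"

tweets = ["Good service!", "Bad experience", "Really good app"]
print(reducer(mapper(t, toy_classify) for t in tweets))
# → {'positive': 2, 'negative': 1}
```

At scale, the map and reduce steps would run as Hadoop jobs over the full tweet dataset rather than an in-memory list.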
COVID-19 is a real problem, and it is spreading like a forest fire. Pandemic data is time-series data, and models that can handle time series include the ARIMA model, the Holt-Winters model, the SARIMAX model, polynomial regression, and LSTM. These models have been applied to COVID-19 data, and the results are discussed along with their significance. This chapter used three datasets. The primary dataset is the 2019 Novel Coronavirus COVID-19 (2019-nCoV) Data Repository by Johns Hopkins CSSE (https://github.com/CSSEGISandData/COVID-19). The second dataset comes from the Worldometers website (https://www.worldometers.info/), and the third from Kaggle. The SARIMAX model produced a MAPE of 0.236, while the Holt-Winters model produced 0.249. The polynomial regression model achieves approximately 85% accuracy for the tenth-day prediction of the number of affected cases and the number of deaths. The LSTM model used the Adam optimizer and was evaluated with the root mean square error; the prediction error for training is 6.45, and the calculated overall error is 5.34.
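The MAPE values quoted for SARIMAX (0.236) and Holt-Winters (0.249) follow the standard mean absolute percentage error; a minimal sketch on made-up numbers (not the chapter's data):

```python
def mape(actual, predicted):
    # Mean Absolute Percentage Error: average of |actual - predicted| / |actual|
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# toy case counts and forecasts (illustrative only, not the chapter's COVID-19 data)
cases = [100, 150, 200, 260]
forecast = [110, 140, 210, 250]
print(round(mape(cases, forecast), 4))
# → 0.0638
```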
Twitter is a news and social networking site where people around the world post and share their feelings, points of view, and comments about any event or the latest movies. Twitter thus generates a massive quantity of data every day. This real-time data is used in the proposed work to implement a movie recommendation system, with sentiment analysis applied to the data to enhance the performance of the framework. Nowadays, recommendation systems are an essential tool for online businesses and are used by various e-commerce sites, music applications, entertainment sites, etc. This work proposes a movie recommendation system built on real-time multilingual tweets obtained from the Twitter API using the LinqToTwitter library. The tweets are translated into the target language using the Google Translate API. The proposed work uses the Stanford library for preprocessing, and an RNN classifies the tweets as positive, negative, or neutral. Preprocessing removes unwanted words, URLs, emoticons, etc. Finally, based on the classification, movies are suggested to the user. This work improves on current practice because the implementation runs on real-time tweets and sentiment analysis is applied to obtain better results. The system achieves 91.67% accuracy, 92% precision, 90.2% recall, and a 90.98% F-measure.
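The preprocessing step the abstract mentions (removing URLs, emoticons, and other unwanted tokens) can be sketched with regular expressions. The exact rules used by the Stanford library will differ, so this is illustrative only:

```python
import re

def preprocess(tweet):
    # strip URLs, then @mentions and #hashtags, then remaining non-alphabetic
    # symbols such as punctuation and emoticons; normalize whitespace and case
    tweet = re.sub(r"https?://\S+", "", tweet)
    tweet = re.sub(r"[@#]\w+", "", tweet)
    tweet = re.sub(r"[^A-Za-z\s]", "", tweet)
    return " ".join(tweet.split()).lower()

print(preprocess("Loved the movie!!! :) http://t.co/xyz #MustWatch @friend"))
# → loved the movie
```

The cleaned text would then be translated and fed to the RNN classifier described above.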
In the developing world it is very clear that as the demand for software increases, software cost also increases. The main problem for software industries is to provide software at a feasible cost that fulfills user requirements. Modern software development strategies require high development cost, large manpower, high-risk maintenance cost, and a long time to complete the software. Component-Based Software Engineering (CBSE) is an approach that develops software from reusable components, selecting existing components and assembling them together rather than developing from scratch. It thus reduces the cost of software development, maintenance, and testing, and also the time taken in the software development process. The main factors of component-based (COTS) development are time and cost savings, but there are many challenges and risks in selecting a component. The objective of this research is to select existing components from compone...
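Since the abstract names time and cost as the main factors in COTS component selection, one simple illustration is a normalized weighted score that ranks candidate components. The weights, field names, and catalog below are hypothetical, not the paper's selection method:

```python
def rank_components(components, w_cost=0.5, w_time=0.5):
    # lower cost and lower integration time are better; normalize each factor
    # by its maximum and combine with illustrative weights (assumptions)
    max_c = max(c["cost"] for c in components)
    max_t = max(c["time"] for c in components)
    def score(c):
        return w_cost * (c["cost"] / max_c) + w_time * (c["time"] / max_t)
    return sorted(components, key=score)   # best (lowest score) first

catalog = [
    {"name": "A", "cost": 500, "time": 10},
    {"name": "B", "cost": 300, "time": 14},
    {"name": "C", "cost": 800, "time": 6},
]
print([c["name"] for c in rank_components(catalog)])
# → ['A', 'B', 'C']
```

Real component selection would weigh many more risk factors (compatibility, vendor support, licensing) than this two-factor toy score.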
An intelligent interface that enables efficient interaction between users and databases is a core need of database applications; databases must be intelligent enough to make access faster. However, not every user is familiar with Structured Query Language (SQL) queries, as they may not be aware of the structure of the database and would otherwise have to learn SQL. Non-expert users therefore need a system that lets them interact with relational databases in a natural language such as English, which requires the Database Management System (DBMS) to understand Natural Language (NL). In this research, an intelligent interface is developed using a semantic matching technique that translates a natural language query to SQL using a set of production rules and a data dictionary. The data dictionary consists of semantic sets for relations and attributes. A series of steps (lower-case conversion, tokenization, part-of-speech tagging, and database element and SQL element extraction) is used to convert Natural ...
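A minimal sketch of dictionary-based semantic matching: lower-case the query, tokenize it, and look the tokens up in semantic sets for relations and attributes to assemble a SELECT statement. The schema and semantic sets below are invented for illustration and are far simpler than the paper's production rules:

```python
# toy data dictionary: semantic sets mapping query words to schema elements (assumed schema)
RELATIONS = {"students": "student", "pupils": "student"}
ATTRIBUTES = {"names": "name", "ages": "age"}

def nl_to_sql(query):
    tokens = query.lower().rstrip("?.").split()   # lower-case conversion + tokenization
    table = next((RELATIONS[t] for t in tokens if t in RELATIONS), None)
    cols = [ATTRIBUTES[t] for t in tokens if t in ATTRIBUTES] or ["*"]
    return f"SELECT {', '.join(cols)} FROM {table}"

print(nl_to_sql("Show the names of all students"))
# → SELECT name FROM student
```

A full system would add part-of-speech tagging and WHERE-clause extraction on top of this lookup step.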
On the path to sustainable development, it is essential to use technology in the right way. Today's world involves various energy-efficient devices in our day-to-day lives, leading to optimal utilization of energy. In this research work, the design and implementation of a smart home system model is proposed that can control all electrical equipment as well as monitor the usage of every device in the smart home. The system combines artificial intelligence and Internet of Things technologies and is intended to bring comfort to individuals in their daily lives. The system not only optimizes energy usage but also accommodates the addition of new equipment, making it a complete smart home package. The proposed system monitors all inputs and outputs throughout the house, whether a person, electricity, or the water supply. This system will help in improving the current standard...
Nowadays, the Internet of Things (IoT) and artificial intelligence (AI) are emerging fields in which researchers are still finding new methods and techniques to reduce human effort. This chapter gives a basic introduction to AI and IoT systems. The seven-layer architecture of IoT frameworks is discussed, along with the functioning of each individual layer, the elementary facilities it provides for connected things, and its complete operation. The chapter also discusses the relationship between AI and IoT, presents various real-time applications that combine the two, and surveys some of their uses.
In the modern computing environment, smart cards are used extensively to authenticate a user with a system or server. Owing to the constraints of computational resources, smart card-based systems require an effective design and an efficient security scheme. In this paper, a smart card authentication protocol based on elliptic curve signcryption is proposed and developed, providing security attributes including message confidentiality, non-repudiation, message integrity, mutual authentication, anonymity, availability, and forward security. Moreover, the analysis of security functionality shows that the protocol developed and explained in this paper is secure against password-guessing attacks, user and server impersonation, replay attacks, de-synchronization attacks, insider attacks, known-key attacks, and man-in-the-middle attacks. The results demonstrate that the proposed smart card security protocol reduces the ...
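Elliptic curve signcryption itself is too involved for a short snippet, so as a stand-in the following hash-based challenge-response sketch illustrates only the mutual-authentication and replay-resistance properties the protocol targets. Every key and nonce here is hypothetical, and this is explicitly not the paper's scheme:

```python
import hashlib, os

def h(*parts):
    # hash of concatenated byte strings, used as a one-way proof of key knowledge
    return hashlib.sha256(b"|".join(parts)).hexdigest()

# long-term secret shared at card personalization time (assumption)
shared_key = os.urandom(16)

# fresh nonces per session defeat replay of previously captured responses
server_nonce = os.urandom(16)
card_nonce = os.urandom(16)

card_response = h(shared_key, server_nonce, card_nonce)   # card proves knowledge of the key
server_response = h(shared_key, card_nonce)               # server proves it back to the card

assert card_response == h(shared_key, server_nonce, card_nonce)   # server-side check
assert server_response == h(shared_key, card_nonce)               # card-side check
```

The signcryption-based protocol additionally provides confidentiality, non-repudiation, and forward security, which this hash-only sketch does not.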
Papers by Arun Solanki