We consider joint beamforming and relay motion control in mobile relay beamforming networks operating in a spatio-temporally varying channel environment. A time-slotted approach is adopted, where in each slot the relays implement optimal beamforming and estimate their optimal positions for the next slot. We place the problem of relay motion control in a sequential decision-making framework and employ Reinforcement Learning (RL) to guide the relay motion, with the goal of maximizing the cumulative Signal-to-Interference+Noise Ratio (SINR) at the destination. First, we present a model-based RL approach, which predictively estimates the SINR and accordingly determines the relay motion, based on partial knowledge of the channel model along with channel measurements at the current relay positions. Second, we propose a model-free deep Q-learning approach, which does not rely on channel models. For the deep Q-learning approach, we propose two modified Multilayer Perceptron (MLP) neural networks for approximating the value function Q. The first modification applies a Fourier feature mapping to the state before passing it through the MLP. The second modification is a different neural network architecture that uses sinusoids as activations between layers. Both modifications enable the MLP to better learn the high-frequency value function and have a profound effect on convergence speed and SINR performance. Finally, we conduct a comparative analysis of all the presented approaches and provide insights on their advantages and drawbacks.
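The second modification mentioned above, sinusoidal activations inside the Q-network, can be illustrated with a minimal sketch. The layer sizes, the frequency scaling factor, and the state/action dimensions are our assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Sketch of a Q-network whose hidden layers use sinusoids as activations
# (sizes and the omega0 scaling are assumptions for illustration only).
class SinusoidalQNet(nn.Module):
    def __init__(self, state_dim, num_actions, hidden=256, omega0=30.0):
        super().__init__()
        self.omega0 = omega0
        self.fc1 = nn.Linear(state_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, num_actions)  # one Q-value per discrete motion action

    def forward(self, state):
        h = torch.sin(self.omega0 * self.fc1(state))  # sinusoid instead of ReLU
        h = torch.sin(self.omega0 * self.fc2(h))
        return self.out(h)

# Example usage with a hypothetical 4-dimensional state and 9 motion actions:
q_values = SinusoidalQNet(state_dim=4, num_actions=9)(torch.randn(32, 4))
```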
The paper considers the discrete 2D motion control of mobile relays implementing distributed beamforming in a spatio-temporally correlated channel environment. A time-slotted scenario is considered, where the relays implement optimal beamforming while standing still, then estimate the optimal positions for the next slot and move to those selected positions to beamform again. The goal is to maximize the cumulative Signal-to-Interference+Noise Ratio (SINR) at the destination. We employ double deep Q-learning to construct the motion policies. The method is completely model-free and agnostic to channel statistics. A Fourier feature mapping is applied to the state before passing it to the Q networks, which enables the learning of a richer representation of the Q function in terms of its frequency spectrum. We also propose a strategy to bias the neural network gradient updates. In the initial stages of training, our approach biases the updates towards easier experiences (experiences that correspond to relatively low loss) from a relay trajectory, while later it gradually places the bias towards harder examples. This bias transition is controlled by a temperature parameter that we vary over the course of training. The proposed approach provides significant improvement in both reward accumulation and speed of convergence.
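One possible reading of the easy-to-hard bias transition is a loss weighting whose sign flips as training progresses; the weighting rule, the variable names, and the schedule below are our assumptions and only sketch the idea.

```python
import torch

def biased_batch_loss(td_errors, progress, temperature=1.0):
    """Hedged sketch of a temperature-controlled bias over batch experiences.

    td_errors:   per-experience losses |Q_target - Q| for a sampled batch
    progress:    training progress in [0, 1]; early training favours low-loss
                 (easy) experiences, late training favours high-loss (hard) ones
    temperature: controls how sharply the bias concentrates on a few experiences
    """
    losses = td_errors.detach()
    sign = 2.0 * progress - 1.0                       # -1 (easy bias) -> +1 (hard bias)
    weights = torch.softmax(sign * losses / temperature, dim=0)
    return torch.sum(weights * td_errors)             # weighted loss drives the gradient update
```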
The paper addresses motion control of mobile relays implementing cooperative beamforming in a time- and space-varying channel environment. The relays move in a time-slotted fashion, and movement is confined within a 2D rectangular plane discretized on a fine grid. In each time slot, the relays optimally beamform to maximize the Signal-to-Interference+Noise Ratio (SINR) at the destination, subject to power constraints, and determine the optimal next-slot positions to which they move by the end of the slot. Prior works have assumed the availability of statistical channel models, which were used to predictively compute the optimal next-slot relay positions. In this paper, we propose a novel, model-free, deep Q-learning approach to govern relay motion policies, which drops all assumptions on channel model statistics and allows relays to learn solely from experience. Due to the randomness of the channels, the Q function is highly varying with respect to state and action. To facilitate the learning of Q, we propose to apply a Fourier mapping of the state with a Gaussian matrix. Via simulations, we show that this approach leads to significant improvement in convergence and SINR performance, as compared to using the state directly.
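The Fourier mapping of the state with a Gaussian matrix can be sketched as a standard random-feature transform; the mapping size and bandwidth below are assumptions chosen for illustration.

```python
import numpy as np

# Minimal sketch: random Fourier mapping of the state using a Gaussian matrix B.
def fourier_mapping(state, B):
    proj = 2.0 * np.pi * B @ state                   # B has i.i.d. Gaussian entries
    return np.concatenate([np.cos(proj), np.sin(proj)])

rng = np.random.default_rng(0)
state_dim, m, sigma = 4, 64, 10.0                    # assumed sizes and bandwidth
B = sigma * rng.standard_normal((m, state_dim))
z = fourier_mapping(np.array([0.1, 0.2, 0.3, 0.4]), B)   # 2*m features fed to the Q-network
```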
Kernel methods are nonparametric feature extraction techniques that attempt to boost the learning capability of machine learning algorithms using nonlinear transformations. However, one major challenge in their basic form is that the computational complexity and the memory requirement do not scale well with the training size. Kernel approximation is commonly employed to resolve this issue. Essentially, kernel approximation is equivalent to learning an approximated subspace in the high-dimensional feature vector space induced and characterized by the kernel function. With streaming data acquisition, approximated subspaces can be constructed adaptively. Explicit feature vectors are then extracted by a transformation onto the approximated subspace, and linear learning techniques can subsequently be applied. From a computational point of view, operations in kernel methods can easily be parallelized, and modern infrastructures can be utilized to achieve efficient computing. Moreover, the extracted explicit feature vectors can easily be interfaced with other learning techniques.
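The subspace view of kernel approximation can be illustrated with one standard technique, the Nystrom method, where explicit features are obtained by projecting samples onto a subspace spanned by a few landmark points; this is an illustrative choice, not necessarily the method used in the paper, and the kernel, landmark count, and bandwidth are assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, landmarks, gamma=0.5):
    W = rbf(landmarks, landmarks, gamma)              # m x m landmark kernel
    C = rbf(X, landmarks, gamma)                      # n x m cross kernel
    vals, vecs = np.linalg.eigh(W)
    vals = np.clip(vals, 1e-12, None)
    W_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return C @ W_inv_sqrt                             # explicit n x m feature vectors

X = np.random.default_rng(0).standard_normal((500, 8))
Z = nystrom_features(X, X[:20])   # Z @ Z.T approximates K(X, X); feed Z to a linear learner
```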
Various internet services, including cloud providers and social networks, collect large amounts of information that needs to be processed for statistical or other reasons without breaching user privacy. We present a novel approach in which privacy protection is viewed as a data transformation problem. The problem is formulated as a pair of classification tasks: (a) a privacy-insensitive task and (b) a privacy-sensitive task. Privacy protection then becomes the requirement that, given the transformed data, no classification algorithm can perform well on the sensitive task, while the performance on the insensitive task is hurt as little as possible. To that end, we introduce a novel criterion called the Multiclass Discriminant Ratio, which is optimized using the generalized eigenvalue decomposition (GED) of a pair of between-class scatter matrices. We then formulate a nonlinear extension of this approach using the kernel GED method. Our proposed methods are evaluated on the Human Activity Recognition data set. Using the kernel-projected data, performance on the User recognition task is reduced by 89%, while performance on the Activity recognition task is reduced by only 7.8%.
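The generalized-eigenvalue step can be sketched as follows: maximize between-class scatter of the insensitive task while suppressing that of the sensitive task. The function names, the regularization, and the number of retained dimensions are our assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def between_class_scatter(X, y):
    mu = X.mean(axis=0)
    S = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(y):
        Xc = X[y == c]
        d = (Xc.mean(axis=0) - mu)[:, None]
        S += len(Xc) * d @ d.T
    return S

def mdr_projection(X, y_insensitive, y_sensitive, dims=5, reg=1e-6):
    Sb_ins = between_class_scatter(X, y_insensitive)
    Sb_sen = between_class_scatter(X, y_sensitive) + reg * np.eye(X.shape[1])
    vals, vecs = eigh(Sb_ins, Sb_sen)                 # generalized eigenvalue problem
    return vecs[:, np.argsort(vals)[::-1][:dims]]     # top eigenvectors form the transform
```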
The computational complexity of kernel methods grows at least quadratically with the training size, and hence low-rank kernel approximation techniques are commonly used. One of the most popular approximations is constructed by sub-sampling the training data. In this paper, we present a sampling algorithm called Enhanced Distance Subset Approximation (EDSA), based on a novel kernel function called the CLAss-Specific Kernel (CLASK), which applies the idea of subspace clustering to low-rank kernel approximation. By representing the kernel matrix with a class-specific subspace model, distinct kernel functions can be used for different classes, which provides greater flexibility than classical kernel approximation techniques. Experimental results on various UCI datasets are provided to verify the proposed techniques.
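To illustrate the idea of a class-specific kernel with per-class subsampling, here is a rough sketch; the per-class bandwidths, subset sizes, and the uniform subsampling are assumptions and do not reproduce the EDSA selection rule itself.

```python
import numpy as np

def rbf(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def class_specific_features(X, X_train, y_train, gammas, per_class=10, seed=0):
    """gammas: dict mapping each class label to its own RBF bandwidth (assumption)."""
    rng = np.random.default_rng(seed)
    blocks = []
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        idx = rng.choice(len(Xc), size=min(per_class, len(Xc)), replace=False)
        blocks.append(rbf(X, Xc[idx], gammas[c]))     # kernel block against class-c landmarks
    return np.hstack(blocks)                          # explicit features for a linear classifier
```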
INTRODUCTION: Sleep stage classification is an important task for the timely diagnosis of sleep-related disorders, which are among the most common indicators of illness. OBJECTIVE: An automated sleep scoring implementation with promising generalization capabilities is presented, helping to eliminate the tedious procedure of manual sleep scoring. METHODS: Two Electroencephalogram (EEG) channels and the Electrooculogram (EOG) channel are used as inputs for feature extraction in both the time and frequency domains, while temporal feature changes are used to capture contextual information in the signals. An ensemble tree-based approach and a neural network approach are presented for the classification step. RESULTS: A total of 66 subjects belonging to three different groups (healthy, placebo, drug intake) were included in the study. The tree-based classification method outperforms the neural network in all cases. CONCLUSION: State-of-the-art results are achieved, while it is...
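Frequency-domain feature extraction of the kind described above can be sketched as relative band power per epoch; the band limits, epoch length, and sampling rate are assumptions, and the actual feature set in the paper is richer.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch, fs=100):
    """Relative power of classic sleep bands for one 30-second, single-channel epoch."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs * 4)
    total = np.trapz(pxx, f)
    feats = []
    for lo, hi in BANDS.values():
        mask = (f >= lo) & (f < hi)
        feats.append(np.trapz(pxx[mask], f[mask]) / total)
    return np.array(feats)

features = band_power_features(np.random.default_rng(0).standard_normal(3000))  # 30 s at 100 Hz
```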
In the era of the 4th industrial revolution, a key challenge for industries is the efficient reduction of the production cost caused by malfunctioning equipment. This paper proposes a Fault Detection and Diagnosis (FDD) framework for non-linear processes utilizing dynamic neural networks and feature reduction methods. We investigate both types of dynamic neural models, i.e., Recurrent Neural Networks, in particular Long Short-Term Memory (LSTM) models, and Time Delay Neural Networks (TDNN). To mitigate overfitting, we also investigate the use of feature reduction techniques such as Non-Negative Matrix Factorization (NMF), Principal Component Analysis (PCA), and kernel PCA (kPCA) as preprocessing steps in our machine learning pipeline. The Tennessee Eastman Process (TEP) is used to evaluate our proposed framework on 18 different faults. Our simulations demonstrate that our method outperforms state-of-the-art methods on the majority of those faults.
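An illustrative sketch of the LSTM branch of such a pipeline, applied after a feature-reduction step, is given below; the layer sizes, the reduced feature dimension, and the number of output classes are assumptions.

```python
import torch
import torch.nn as nn

class LSTMFaultClassifier(nn.Module):
    """Classifies a window of reduced process variables into fault classes."""
    def __init__(self, in_dim=10, hidden=64, num_classes=19):  # e.g. 18 faults + normal
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                # x: (batch, time, reduced features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # classify from the last time step

# A feature-reduction step (e.g. PCA, NMF, or kPCA) would map the raw process
# variables to `in_dim` components before the windows are fed to the LSTM.
logits = LSTMFaultClassifier()(torch.randn(8, 50, 10))
```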
Conventional recommendation methods such as collaborative filtering cannot be applied when long-term user models are not available. In this paper, we propose two session-based recommendation methods for anonymous browsing in a generic e-commerce framework. We represent the data using a graph where items are connected to sessions and to each other based on their order of appearance or their co-occurrence. In the first approach, called Hierarchical Sequence Probability (HSP), recommendations are produced using the probabilities of item appearances in certain structures of the graph. Specifically, given the current item of a session, to create a list of recommended next items, we first compute the probabilities of all possible sequential triplets ending in each candidate next item, then of all candidate item pairs, and finally of the proposed item. In our second method, called Recurrent Item Co-occurrence (RIC), we generate the recommendation list based on a weighted score produced...
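A minimal sketch of the counting behind a hierarchical sequence score, as we read it, is shown below: candidate next items are scored from triplet, pair, and single occurrence counts over past sessions. The back-off weights and function names are assumptions, not the paper's exact formulation.

```python
from collections import defaultdict

def build_counts(sessions):
    """sessions: list of lists of item ids, in order of appearance."""
    triples, pairs, singles = defaultdict(int), defaultdict(int), defaultdict(int)
    for s in sessions:
        for i, item in enumerate(s):
            singles[item] += 1
            if i >= 1:
                pairs[(s[i - 1], item)] += 1
            if i >= 2:
                triples[(s[i - 2], s[i - 1], item)] += 1
    return triples, pairs, singles

def score_next(prev2, prev1, candidate, triples, pairs, singles):
    # back-off style score: prefer triplet evidence, then pair, then popularity
    return (triples[(prev2, prev1, candidate)]
            + 1e-3 * pairs[(prev1, candidate)]
            + 1e-6 * singles[candidate])
```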
The work in this paper extends our previous work: M. Delianidi, K. Diamantaras, G. Chrysogonidis, and V. Nikiforidis, "Student performance prediction using dynamic neural models," in Fourteenth International Conference on Educational Data Mining (EDM 2021), 2021, pp. 46–54. In both works we study the task of predicting a student's performance on a series of questions based, at each step, on the answers he or she has given to the previous questions. We propose a recurrent neural network approach in which the dynamic part of the model is a Bidirectional GRU layer. In this work, we differentiate the model architecture from the earlier paper by requiring that the dynamic part be based exclusively on the history of previous questions/answers, not including the current question. The subsequent classification part is then fed the output of the dynamic part together with the current question. In this way, the first part estimates the student's knowledge state and represents it w...
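A hedged sketch of this two-part architecture follows: the dynamic part sees only the history of (question, answer) pairs, and the classifier combines the resulting knowledge state with the current question. All sizes and encodings are assumptions.

```python
import torch
import torch.nn as nn

class KnowledgeTracer(nn.Module):
    def __init__(self, q_dim, hidden=64):
        super().__init__()
        self.bigru = nn.GRU(q_dim + 1, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden + q_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, hist_q, hist_a, cur_q):
        # hist_q: (B, T, q_dim) past questions, hist_a: (B, T, 1) past answers,
        # cur_q: (B, q_dim) current question (not seen by the dynamic part)
        state, _ = self.bigru(torch.cat([hist_q, hist_a], dim=-1))
        knowledge = state[:, -1, :]                       # estimated knowledge state
        return torch.sigmoid(self.classifier(torch.cat([knowledge, cur_q], dim=-1)))
```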
The objective of our research is to investigate new digital techniques and tools, offering the audience innovative, attractive, enhanced and accessible experiences. The project focuses on performing arts, particularly theatre, aiming at designing, implementing, experimenting with and evaluating technologies and tools that expand the semiotic code of a performance by offering new opportunities and aesthetic means in stage art and by introducing parallel accessible narrative flows. In our novel paradigm, modern technologies emphasize the stage elements, providing a multilevel, intense and immersive theatrical experience. Moreover, lighting, video projections, audio clips and digital characters are incorporated, bringing unique aesthetic features. We also attempt to remove sensory and language barriers faced by some audiences. Accessibility features consist of subtitles, sign language and audio description. The project emphasizes natural language processing technologies, embedded communic...
This study scrutinizes the existing literature regarding the use of augmented reality and gamification in education to establish its theoretical basis. A systematic literature review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was conducted. To provide complete and valid information, all types of related studies for all educational stages and subjects throughout the years were investigated. In total, 670 articles from 5 databases (Scopus, Web of Science, Google Scholar, IEEE, and ERIC) were examined. Based on the results, using augmented reality and gamification in education can yield several benefits for students, assist educators, improve the educational process, and facilitate the transition toward technology-enhanced learning when used in a student-centered manner, following proper educational approaches and strategies and taking students’ knowledge, interests, unique characteristics, and personality traits into considerati...
With a view to creating a mixed reality that combines coexisting real and virtual objects and to providing users with real-time access to information in an interactive manner, augmented reality enriches users' physical environment by incorporating digital and real objects and rendering them in the physical environment in the proper temporal and spatial framework. Due to its nature, augmented reality can be combined with, and exploit, other innovative technologies in order to improve its efficiency and potential. Such technologies include the semantic web, knowledge graphs and deep learning. The study's main purpose and contribution is to showcase the benefits of developing semantically enriched augmented reality applications, to present a system architecture for developing such applications, and to showcase and assess an augmented reality application developed following the proposed architecture. The specific application aims at facilitating end-users' day-to-day activities, enhancin...
MIMO transmit arrays allow for flexible design of the transmit beampattern. However, the large number of elements required to achieve a certain performance using uniform linear arrays (ULAs) may be too costly. This motivates the design of thinned arrays, in which a small number of elements is appropriately selected so that the full-array beampattern is preserved. In this paper, we propose Learn-to-Select (L2S), a novel machine learning model for selecting antennas from a dense ULA, employing a combination of multiple Softmax layers constrained by an orthogonalization criterion. The proposed approach can be efficiently scaled to larger problems as it avoids the combinatorial explosion of the selection problem. It also offers a flexible array design framework, as the selection problem can easily be formulated for any metric.
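A hedged sketch of a selection layer in the spirit of L2S is given below: k softmax vectors over the N dense-array elements, pushed towards picking k distinct antennas by an orthogonality penalty. The penalty form, its weight, and the sizes are assumptions.

```python
import torch
import torch.nn as nn

class SoftSelector(nn.Module):
    def __init__(self, n_elements, k_select):
        super().__init__()
        self.logits = nn.Parameter(torch.randn(k_select, n_elements))

    def forward(self):
        S = torch.softmax(self.logits, dim=1)         # each row ~ one selected element
        ortho = ((S @ S.T - torch.eye(S.shape[0])) ** 2).sum()
        return S, ortho                               # add `ortho` to the beampattern loss

selector = SoftSelector(n_elements=64, k_select=8)
S, penalty = selector()
chosen = S.argmax(dim=1)                              # hard selection after training
```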
Automated sentiment analysis and opinion mining is a complex process concerning the extraction of useful subjective information from text. The explosion of user-generated content on the Web, and especially the fact that millions of users, on a daily basis, express their opinions on products and services in blogs, wikis, social networks, message boards, etc., render the reliable, automated extraction of sentiments and opinions from unstructured text crucial for several commercial applications. In this paper, we present a novel hybrid vectorization approach for textual resources that combines a weighted variant of the popular Word2Vec representation (based on Term Frequency-Inverse Document Frequency weighting) with a Bag-of-Words representation and a vector of lexicon-based sentiment values. The proposed text representation approach is assessed through the application of several machine learning classification algorithms on a dataset that is used extensively in the literature for senti...
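An illustrative sketch of such a hybrid document vector follows: a TF-IDF-weighted average of word embeddings, concatenated with a bag-of-words vector and a lexicon-based sentiment score. The embeddings, IDF table, vocabulary, and lexicon are assumed to be given, and the exact combination in the paper may differ.

```python
import numpy as np

def hybrid_vector(tokens, embeddings, idf, vocab, lexicon):
    """tokens: list of words; embeddings: word -> vector; idf: word -> IDF weight;
    vocab: word -> BoW index; lexicon: word -> sentiment score (all assumed given)."""
    dim = len(next(iter(embeddings.values())))
    weighted, total = np.zeros(dim), 0.0
    bow, sentiment = np.zeros(len(vocab)), 0.0
    for t in tokens:
        w = idf.get(t, 0.0)
        if t in embeddings:
            weighted += w * embeddings[t]
            total += w
        if t in vocab:
            bow[vocab[t]] += 1
        sentiment += lexicon.get(t, 0.0)
    if total > 0:
        weighted /= total
    return np.concatenate([weighted, bow, [sentiment / max(len(tokens), 1)]])
```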
We address the problem of predicting the correctness of a student's response to the next exam question based on their previous interactions in the course of their learning and evaluation process. We model student performance as a dynamic problem and compare the two major classes of dynamic neural architectures for its solution, namely the finite-memory Time Delay Neural Networks (TDNN) and the potentially infinite-memory Recurrent Neural Networks (RNN). Since the next response is a function of the knowledge state of the student, and this, in turn, is a function of their previous responses and the skills associated with the previous questions, we propose a two-part network architecture. The first part employs a dynamic neural network (either TDNN or RNN) to trace the student's knowledge state. The second part is applied on top of the dynamic part and is a multi-layer feed-forward network which completes the classification task of predicting the student response based on our estima...
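For the finite-memory (TDNN) variant of the dynamic part, a minimal sketch is a 1-D convolution over a sliding window of past interactions followed by the feed-forward classification part; the window length, feature encoding, and sizes are assumptions.

```python
import torch
import torch.nn as nn

class TDNNTracer(nn.Module):
    def __init__(self, in_dim, hidden=64, window=5):
        super().__init__()
        self.tdnn = nn.Conv1d(in_dim, hidden, kernel_size=window)   # finite memory of `window` steps
        self.classifier = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, history):                               # history: (B, T, in_dim), T >= window
        h = torch.relu(self.tdnn(history.transpose(1, 2)))    # (B, hidden, T - window + 1)
        return torch.sigmoid(self.classifier(h[:, :, -1]))    # predict the next response

prob_correct = TDNNTracer(in_dim=16)(torch.randn(8, 20, 16))
```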
Research into session-based recommendation systems (SBRS) has attracted a lot of attention, but each study focuses on a specific class of methods. This work examines and evaluates a large range of methods, from simpler statistical co-occurrence methods to embeddings and state-of-the-art deep learning methods. This paper analyzes theoretical and practical issues in developing and evaluating methods for SBRS in e-commerce applications, where user profiles and purchase data do not exist. The major tasks of SBRS are reviewed and studied, namely: next-item, next-basket and purchase-intent prediction. For physical retail shopping, where no information about the current session exists, we treat the previous baskets purchased by the user as previous sessions drawn from a loyalty system. Mobile application scenarios such as push notifications and calling tune recommendations are also presented. Recommender models using graphs, embeddings and deep learning methods are studied and evaluated in all SBRS ...


This book is a systematic and accessible guide that introduces the reader to the rapidly evolving world of parallel processing.

It is ideal for students in the field, researchers and programmers, with the only prerequisite being some basic knowledge of algorithms and programming.

The book studies the design of parallel systems, both at the level of computer architecture and at the level of programming. It explains fundamental concepts such as multicomputers and multiprocessors, presents the main metrics for evaluating the performance of parallel algorithms, and describes the basic network architectures for parallel processing. In addition, it covers extensively the implementation of parallel algorithms on widely used parallel processing architectures, such as graphics cards (GPUs), through the CUDA standard and the OpenCL language.

Contents:

Parallel processing architectures
Interconnection networks
General parallelization issues
Task parallelization
Nested loops
Dependencies in nested loops
Scheduling
Mapping
GPU computing: parallel processing on graphics cards
Mathematical and algorithmic tools
Mapping of nested loops
Systolic processor arrays
The field of artificial neural networks has grown rapidly over the last 25 years and now constitutes a broad and autonomous scientific field, related to the wider context of artificial intelligence and intelligent systems. This book systematically describes the most important neural network models, starting from the simple single-neuron Perceptron model and continuing with multilayer Perceptron networks and the Back Propagation training algorithm, Radial Basis Function (RBF) networks, self-organizing networks such as the SOM model, linear and nonlinear Hebbian learning models, Support Vector Machines (SVM), dynamic models such as the Hopfield model, and many others.
Understanding the underlying principles of biological perceptual systems is of vital importance not only to neuroscientists, but, increasingly, to engineers and computer scientists who wish to develop artificial perceptual systems. In this original and groundbreaking work, the authors systematically examine the relationship between the powerful technique of Principal Component Analysis (PCA) and neural networks. Principal Component Neural Networks focuses on issues pertaining to both neural network models (i.e., network structures and algorithms) and theoretical extensions of PCA. In addition, it provides basic review material in mathematics and neurobiology. This book presents neural models originating from both the Hebbian learning rule and least squares learning rules, such as back-propagation. Its ultimate objective is to provide a synergistic exploration of the mathematical, algorithmic, application, and architectural aspects of principal component neural networks. Especially valuable to researchers and advanced students in neural network theory and signal processing, this book offers application examples from a variety of areas, including high-resolution spectral estimation, system identification, image compression, and pattern recognition.
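One classical Hebbian rule in this literature is Sanger's Generalized Hebbian Algorithm, which drives a linear layer towards the leading principal components; the sketch below is a minimal illustration, with the learning rate, data, and sizes chosen as assumptions.

```python
import numpy as np

def gha_step(W, x, lr=1e-3):
    """One Generalized Hebbian Algorithm update for a linear layer W (k x d)."""
    y = W @ x                                            # outputs of the linear neurons
    # Hebbian term minus a Gram-Schmidt-like decorrelation term
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

rng = np.random.default_rng(0)
X = rng.standard_normal((10000, 8)) @ rng.standard_normal((8, 8))   # correlated data
W = 0.01 * rng.standard_normal((3, 8))                               # extract 3 components
for x in X - X.mean(axis=0):
    W = gha_step(W, x)                                   # rows of W approach the top eigenvectors
```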