
Machine Learning

2024-10-17 | Total: 180

#1 Metal Price Spike Prediction via a Neurosymbolic Ensemble Approach

Authors: Nathaniel Lee ; Noel Ngu ; Harshdeep Singh Sahdev ; Pramod Motaganahall ; Al Mehdi Saadat Chowdhury ; Bowen Xi ; Paulo Shakarian

Predicting price spikes in critical metals such as Cobalt, Copper, Magnesium, and Nickel is crucial for mitigating economic risks associated with global trends like the energy transition and reshoring of manufacturing. While traditional models have focused on regression-based approaches, our work introduces a neurosymbolic ensemble framework that integrates multiple neural models with symbolic error detection and correction rules. This framework is designed to enhance predictive accuracy by correcting individual model errors and offering interpretability through rule-based explanations. We show that our method provides up to a 6.42% improvement in precision, a 29.41% increase in recall, and a 13.24% increase in F1 over the best-performing neural models. Further, because our method is based on logical rules, it has the benefit of affording an explanation as to which combination of neural models directly contributes to a given prediction.

Subject: Machine Learning

Publish: 2024-10-16 17:58:34 UTC

#2 Context-Scaling versus Task-Scaling in In-Context Learning

Authors: Amirhesam Abedsoltan ; Adityanarayanan Radhakrishnan ; Jingfeng Wu ; Mikhail Belkin

Transformers exhibit In-Context Learning (ICL), where these models solve new tasks by using examples in the prompt without additional training. In our work, we identify and analyze two key components of ICL: (1) context-scaling, where model performance improves as the number of in-context examples increases, and (2) task-scaling, where model performance improves as the number of pre-training tasks increases. While transformers are capable of both context-scaling and task-scaling, we empirically show that standard Multi-Layer Perceptrons (MLPs) with vectorized input are only capable of task-scaling. To understand how transformers are capable of context-scaling, we first propose a significantly simplified transformer architecture without key, query, and value weights. We show that it performs ICL comparably to the original GPT-2 model on various statistical learning tasks, including linear regression and teacher-student settings. Furthermore, a single block of our simplified transformer can be viewed as a data-dependent feature map followed by an MLP. This feature map on its own is a powerful predictor that is capable of context-scaling but not of task-scaling. We show empirically that concatenating the output of this feature map with vectorized data as input to MLPs enables both context-scaling and task-scaling. This finding provides a simple setting in which to study context- and task-scaling for ICL.
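
As a concrete illustration, here is a minimal numpy sketch of one plausible reading of such a data-dependent feature map -- attention computed directly from token similarities, with no key, query, or value weights; the paper's exact block may differ.

```python
import numpy as np

def simplified_attention_features(X):
    """Sketch of a data-dependent feature map: tokens attend to each other
    through raw inner-product similarities (identity key/query/value).
    X: (n_tokens, d) matrix of in-context tokens."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ X                             # context-mixed tokens

# Concatenating these features with the vectorized data as MLP input is
# the combination the abstract reports enables both scaling behaviors.
X = np.random.randn(16, 8)                         # 16 in-context tokens
features = simplified_attention_features(X)
```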

Subjects: Machine Learning (cs.LG) ; Machine Learning (stat.ML)

Publish: 2024-10-16 17:58:08 UTC

#3 Geometry-Aware Generative Autoencoders for Warped Riemannian Metric Learning and Generative Modeling on Data Manifolds

Authors: Xingzhi Sun ; Danqi Liao ; Kincaid MacDonald ; Yanlei Zhang ; Chen Liu ; Guillaume Huguet ; Guy Wolf ; Ian Adelstein ; Tim G. J. Rudner ; Smita Krishnaswamy

The rapid growth of high-dimensional datasets in fields such as single-cell RNA sequencing and spatial genomics has led to unprecedented opportunities for scientific discovery, but it also presents unique computational and statistical challenges. Traditional methods struggle with geometry-aware data generation, interpolation along meaningful trajectories, and transporting populations via feasible paths. To address these issues, we introduce the Geometry-Aware Generative Autoencoder (GAGA), a novel framework that combines extensible manifold learning with generative modeling. GAGA constructs a neural network embedding space that respects the intrinsic geometries discovered by manifold learning and learns a novel warped Riemannian metric on the data space. This warped metric is derived from both the points on the data manifold and negative samples off the manifold, allowing it to characterize a meaningful geometry across the entire latent space. Using this metric, GAGA can uniformly sample points on the manifold, generate points along geodesics, and interpolate between populations across the learned manifold. GAGA shows competitive performance on simulated and real-world datasets, including a 30% improvement over state-of-the-art methods in single-cell population-level trajectory inference.

Subjects: Machine Learning (cs.LG) ; Differential Geometry ; Machine Learning (stat.ML)

Publish: 2024-10-16 17:53:26 UTC

#4 The Non-Local Model Merging Problem: Permutation Symmetries and Variance Collapse

Authors: Ekansh Sharma ; Daniel M. Roy ; Gintare Karolina Dziugaite

Model merging aims to efficiently combine the weights of multiple expert models, each trained on a specific task, into a single multi-task model with strong performance across all tasks. When applied to all but the last layer of weights, existing methods -- such as Task Arithmetic, TIES-merging, and TALL mask merging -- work well to combine expert models obtained by fine-tuning a common foundation model, operating within a "local" neighborhood of the foundation model. This work explores the more challenging scenario of "non-local" merging, which we find arises when an expert model changes significantly during pretraining or when the expert models do not even share a common foundation model. We observe that standard merging techniques often fail to generalize effectively in this non-local setting, even when accounting for permutation symmetries using standard techniques. We identify that this failure is, in part, due to "variance collapse", a phenomenon also identified in the setting of linear mode connectivity by Jordan et al. (2023). To address this, we propose a multi-task technique to re-scale and shift the output activations of the merged model for each task, aligning its output statistics with those of the corresponding task-specific expert models. Our experiments demonstrate that this correction significantly improves the performance of various model merging approaches in non-local settings, providing a strong baseline for future research on this problem.

Subject: Machine Learning

Publish: 2024-10-16 17:41:59 UTC

#5 Initialization Method for Factorization Machine Based on Low-Rank Approximation for Constructing a Corrected Approximate Ising Model

Authors: Yuya Seki ; Hyakka Nakada ; Shu Tanaka

This paper presents an initialization method that can approximate a given approximate Ising model with a high degree of accuracy using the Factorization Machine (FM), a machine learning model. The construction of Ising models using FM is applied to combinatorial optimization via the factorization machine with quantum annealing (FMQA). It is anticipated that the optimization performance of FMQA will be enhanced through the implementation of a warm-start method. Nevertheless, the optimal initialization method for leveraging the warm-start approach in FMQA remains undetermined. Consequently, the present study compares a number of initialization methods and, through numerical experimentation, identifies the most appropriate one for use with a warm start in FMQA. Furthermore, the properties of the proposed FM initialization method are analyzed using random matrix theory, demonstrating that the approximation accuracy of the proposed method is not significantly influenced by the specific Ising model under consideration. The findings of this study will facilitate the advancement of combinatorial optimization problem-solving through the use of Ising machines.
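
The abstract does not spell out the construction, but a natural low-rank initialization (offered here as an assumption, not the paper's exact recipe) factorizes the coupling matrix so that the FM's pairwise interaction term V @ V.T approximates it:

```python
import numpy as np

def fm_init_from_ising(J, k):
    """Hypothetical low-rank initialization of FM factors: choose V (n x k)
    such that V @ V.T approximates the symmetrized Ising coupling matrix J,
    using its top-k eigenpairs projected onto the PSD cone."""
    J_sym = (J + J.T) / 2.0
    vals, vecs = np.linalg.eigh(J_sym)        # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:k]          # top-k by value
    vals_k = np.clip(vals[idx], 0.0, None)    # drop negative parts
    return vecs[:, idx] * np.sqrt(vals_k)

# Example: rank-4 factors for a random 10-spin coupling matrix.
J = np.random.randn(10, 10)
V = fm_init_from_ising(J, k=4)                # shape (10, 4)
```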

Subject: Machine Learning

Publish: 2024-10-16 17:06:55 UTC

#6 CREAM: Consistency Regularized Self-Rewarding Language Models

Authors: Zhaoyang Wang ; Weilei He ; Zhiyuan Liang ; Xuchao Zhang ; Chetan Bansal ; Ying Wei ; Weitong Zhang ; Huaxiu Yao

Recent self-rewarding large language models (LLMs) have successfully applied LLM-as-a-Judge to iteratively improve alignment performance without the need for human annotations of preference data. These methods commonly utilize the same LLM to act as both the policy model (which generates responses) and the reward model (which scores and ranks those responses). The ranked responses are then used as preference pairs to train the LLM via direct alignment techniques (e.g., DPO). However, throughout this process there is no guarantee of accuracy in the rewarding and ranking, which is critical for ensuring accurate rewards and high-quality preference data. Empirical results from relatively small LLMs (e.g., 7B parameters) also indicate that improvements from self-rewarding may diminish after several iterations in certain situations, which we hypothesize is due to accumulated bias in the reward system. This bias can lead to unreliable preference data for training the LLM. To address this issue, we first formulate and analyze a generalized iterative preference fine-tuning framework for self-rewarding language models. We then introduce a regularization to this generalized framework to mitigate overconfident preference labeling in the self-rewarding process. Based on this theoretical insight, we propose a Consistency Regularized sElf-rewarding lAnguage Model (CREAM) that leverages the rewarding consistency across different iterations to regularize the self-rewarding training, helping the model learn from more reliable preference data. With this explicit regularization, our empirical results demonstrate the superiority of CREAM in improving both reward consistency and alignment performance. The code is publicly available at https://github.com/Raibows/CREAM.
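
As a toy illustration of rewarding consistency across iterations, one plausible proxy (an assumption here, not necessarily CREAM's exact regularizer) scores the agreement between the reward rankings that consecutive iterations assign to the same responses:

```python
from scipy.stats import kendalltau

def consistency_weight(ranking_now, ranking_prev):
    """Agreement between reward rankings of the same candidate responses
    from two consecutive self-rewarding iterations, mapped from [-1, 1]
    to [0, 1]; higher agreement -> more trustworthy preference pair."""
    tau, _ = kendalltau(ranking_now, ranking_prev)
    return (tau + 1.0) / 2.0

# Example: rankings of five responses from iterations t-1 and t.
w = consistency_weight([1, 2, 3, 4, 5], [2, 1, 3, 5, 4])
```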

Subjects: Machine Learning ; Computation and Language

Publish: 2024-10-16 16:51:01 UTC

#7 Counterfactual Generative Modeling with Variational Causal Inference

Authors: Yulun Wu ; Louie McConnell ; Claudia Iriondo

Estimating an individual's potential outcomes under counterfactual treatments is a challenging task for traditional causal inference and supervised learning approaches when the outcome is high-dimensional (e.g., gene expressions, facial images) and covariates are relatively limited. In this case, to predict an individual's outcomes under counterfactual treatments, it is crucial to leverage the individual information contained in the high-dimensional observed outcome in addition to the covariates. Prior works using variational inference in counterfactual generative modeling have focused on neural adaptations and model variants within the conditional variational autoencoder formulation, which we argue is fundamentally ill-suited to the notion of counterfactuals in causal inference. In this work, we present a novel variational Bayesian causal inference framework and its theoretical backing to properly handle counterfactual generative modeling tasks, through which we are able to conduct counterfactual supervision end-to-end during training without any counterfactual samples, and encourage latent disentanglement that aids the correct identification of causal effects in counterfactual generations. In experiments, we demonstrate the advantage of our framework compared to state-of-the-art models in counterfactual generative modeling on multiple benchmarks.

Subjects: Machine Learning (cs.LG) ; Artificial Intelligence ; Statistics Theory ; Machine Learning (stat.ML)

Publish: 2024-10-16 16:44:12 UTC

#8 Transformer based super-resolution downscaling for regional reanalysis: Full domain vs tiling approaches

Authors: Antonio Pérez ; Mario Santa Cruz ; Daniel San Martín ; José Manuel Gutiérrez

Super-resolution (SR) is a promising cost-effective downscaling methodology for producing high-resolution climate information from coarser counterparts. A particular application is downscaling regional reanalysis outputs (predictand) from the driving global counterparts (predictor). This study conducts an intercomparison of various SR downscaling methods focusing on temperature and using the CERRA reanalysis (5.5 km resolution, produced with a regional atmospheric model driven by ERA5) as an example. The method proposed in this work is the Swin transformer, and two alternative methods are used as benchmarks (a fully convolutional U-Net and the convolutional and dense DeepESD), as well as simple bicubic interpolation. We compare two approaches: the standard one, using the full domain as input, and a more scalable tiling approach, which divides the full domain into tiles that are used as input. The methods are trained to downscale CERRA surface temperature based on temperature information from the driving ERA5; in addition, the tiling approach includes static orographic information. We show that the tiling approach, which requires spatial transferability, comes at the cost of lower performance (although it outperforms some full-domain benchmarks), but provides an efficient, scalable solution that allows SR downscaling at a pan-European scale and is valuable for real-time applications.

Subjects: Machine Learning ; Artificial Intelligence

Publish: 2024-10-16 16:42:20 UTC

#9 How Does Variance Shape the Regret in Contextual Bandits?

Authors: Zeyu Jia ; Jian Qian ; Alexander Rakhlin ; Chen-Yu Wei

We consider realizable contextual bandits with general function approximation, investigating how small reward variance can lead to better-than-minimax regret bounds. Unlike in minimax bounds, we show that the eluder dimension $d_\text{elu}$ -- a complexity measure of the function class -- plays a crucial role in variance-dependent bounds. We consider two types of adversaries: (1) Weak adversary: The adversary sets the reward variance before observing the learner's action. In this setting, we prove that a regret of $\Omega(\sqrt{\min\{A,d_\text{elu}\}\Lambda}+d_\text{elu})$ is unavoidable when $d_{\text{elu}}\leq\sqrt{AT}$, where $A$ is the number of actions, $T$ is the total number of rounds, and $\Lambda$ is the total variance over $T$ rounds. For the $A\leq d_\text{elu}$ regime, we derive a nearly matching upper bound $\tilde{O}(\sqrt{A\Lambda}+d_\text{elu})$ for the special case where the variance is revealed at the beginning of each round. (2) Strong adversary: The adversary sets the reward variance after observing the learner's action. We show that a regret of $\Omega(\sqrt{d_\text{elu}\Lambda}+d_\text{elu})$ is unavoidable when $\sqrt{d_\text{elu}\Lambda}+d_\text{elu}\leq\sqrt{AT}$. In this setting, we provide an upper bound of order $\tilde{O}(d_\text{elu}\sqrt{\Lambda}+d_\text{elu})$. Furthermore, we examine the setting where the function class additionally provides distributional information of the reward, as studied by Wang et al. (2024). We demonstrate that the regret bound $\tilde{O}(\sqrt{d_\text{elu}\Lambda}+d_\text{elu})$ established in their work is unimprovable when $\sqrt{d_{\text{elu}}\Lambda}+d_\text{elu}\leq\sqrt{AT}$. However, with a slightly different definition of the total variance and with the assumption that the reward follows a Gaussian distribution, one can achieve a regret of $\tilde{O}(\sqrt{A\Lambda}+d_\text{elu})$.
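
For readability, the main bounds stated above can be collected in a single display ($A$: number of actions, $\Lambda$: total variance over $T$ rounds, $d_\text{elu}$: eluder dimension):

```latex
% Variance-dependent regret bounds summarized from the abstract.
\begin{aligned}
&\textbf{Weak adversary:}
&& \Omega\!\left(\sqrt{\min\{A, d_{\text{elu}}\}\,\Lambda} + d_{\text{elu}}\right)
   \text{ (lower)},
&& \tilde{O}\!\left(\sqrt{A\Lambda} + d_{\text{elu}}\right)
   \text{ (upper; variance revealed, } A \le d_{\text{elu}}\text{)},\\
&\textbf{Strong adversary:}
&& \Omega\!\left(\sqrt{d_{\text{elu}}\Lambda} + d_{\text{elu}}\right)
   \text{ (lower)},
&& \tilde{O}\!\left(d_{\text{elu}}\sqrt{\Lambda} + d_{\text{elu}}\right)
   \text{ (upper)}.
\end{aligned}
```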

Subjects: Machine Learning (cs.LG) ; Machine Learning (stat.ML)

Publish: 2024-10-16 16:20:07 UTC

#10 Sarcasm Detection in a Less-Resourced Language

Authors: Lazar Đoković ; Marko Robnik-Šikonja

The sarcasm detection task in natural language processing tries to classify whether an utterance is sarcastic or not. It is related to sentiment analysis, since it often inverts surface sentiment. Because sarcastic sentences are highly dependent on context and are often accompanied by various non-verbal cues, the task is challenging. Most related work focuses on high-resourced languages like English. To build a sarcasm detection dataset for a less-resourced language, such as Slovenian, we leverage two modern techniques: a medium-size transformer model specialized for machine translation, and a very large generative language model. We explore the viability of translated datasets and how the size of a pretrained transformer affects its ability to detect sarcasm. We train ensembles of detection models and evaluate the models' performance. The results show that larger models generally outperform smaller ones and that ensembling can slightly improve sarcasm detection performance. Our best ensemble approach achieves an $\text{F}_1$-score of 0.765, which is close to the annotators' agreement in the source language.

Subjects: Machine Learning ; Computation and Language

Publish: 2024-10-16 16:10:59 UTC

#11 Neural-based Control for CubeSat Docking Maneuvers

Authors: Matteo Stoisa ; Federica Paganelli Azza ; Luca Romanelli ; Mattia Varile

Autonomous Rendezvous and Docking (RVD) has been extensively studied in recent years, addressing the stringent requirements of spacecraft dynamics variations and the limitations of Guidance, Navigation, and Control (GNC) systems. This paper presents an innovative approach employing Artificial Neural Networks (ANNs) trained through Reinforcement Learning (RL) for autonomous spacecraft guidance and control during the final phase of the rendezvous maneuver. The proposed strategy is easily implementable onboard and offers fast adaptability and robustness to disturbances by learning control policies from experience rather than relying on predefined models. Extensive Monte Carlo simulations within a relevant environment are conducted in 6DoF settings to validate our approach, along with hardware tests that demonstrate deployment feasibility. Our findings highlight the efficacy of RL in assuring the adaptability and efficiency of spacecraft RVD, offering insights into future mission expectations.

Subject: Machine Learning

Publish: 2024-10-16 16:05:46 UTC

#12 Optimizing Multi-Task Learning for Accurate Spacecraft Pose Estimation

Authors: Francesco Evangelisti ; Francesco Rossi ; Tobia Giani ; Ilaria Bloise ; Mattia Varile

Accurate satellite pose estimation is crucial for autonomous guidance, navigation, and control (GNC) systems in in-orbit servicing (IOS) missions. This paper explores the impact of different tasks within a multi-task learning (MTL) framework for satellite pose estimation using monocular images. By integrating tasks such as direct pose estimation, keypoint prediction, object localization, and segmentation into a single network, the study aims to evaluate the reciprocal influence between tasks by testing different multi-task configurations, enabled by the modularity of the convolutional neural network (CNN) used in this work. Trends of mutual bias between the analyzed tasks are identified by employing different weighting strategies to further test the robustness of the findings. A synthetic dataset was developed to train and test the MTL network. Results indicate that direct pose estimation and heatmap-based pose estimation generally influence each other positively, while the bounding box and segmentation tasks do not provide significant contributions and tend to degrade the overall estimation accuracy.

Subject: Machine Learning

Publish: 2024-10-16 15:44:15 UTC

#13 Context Matters: Leveraging Contextual Features for Time Series Forecasting

Authors: Sameep Chattopadhyay ; Pulkit Paliwal ; Sai Shankar Narasimhan ; Shubhankar Agarwal ; Sandeep P. Chinchali

Time series forecasts are often influenced by exogenous contextual features in addition to their corresponding history. For example, in financial settings, it is hard to accurately predict a stock price without considering public sentiments and policy decisions in the form of news articles, tweets, etc. Though this is common knowledge, the current state-of-the-art (SOTA) forecasting models fail to incorporate such contextual information, owing to its heterogeneity and multimodal nature. To address this, we introduce ContextFormer, a novel plug-and-play method to surgically integrate multimodal contextual information into existing pre-trained forecasting models. ContextFormer effectively distills forecast-specific information from rich multimodal contexts, including categorical, continuous, time-varying, and even textual information, to significantly enhance the performance of existing base forecasters. ContextFormer outperforms SOTA forecasting models by up to 30% on a range of real-world datasets spanning energy, traffic, environmental, and financial domains.

Subjects: Machine Learning ; Artificial Intelligence

Publish: 2024-10-16 15:36:13 UTC

#14 New Paradigm of Adversarial Training: Breaking Inherent Trade-Off between Accuracy and Robustness via Dummy Classes

Authors: Yanyun Wang ; Li Liu ; Zi Liang ; Qingqing Ye ; Haibo Hu

Adversarial Training (AT) is one of the most effective methods to enhance the robustness of DNNs. However, existing AT methods suffer from an inherent trade-off between adversarial robustness and clean accuracy, which seriously hinders their real-world deployment. While this problem has been widely studied within the current AT paradigm, existing AT methods to date still typically experience a reduction in clean accuracy of over 10%, without significant improvements in robustness compared with simple baselines like PGD-AT. This inherent trade-off raises a question: whether the current AT paradigm, which assumes that corresponding benign and adversarial samples should be learned as the same class, inappropriately combines clean and robust objectives that may be essentially inconsistent. In this work, we surprisingly reveal that up to 40% of CIFAR-10 adversarial samples consistently fail to satisfy this assumption across various AT methods and robust models, indicating clear room for improvement in the current AT paradigm. Accordingly, to relax the tension between clean and robust learning derived from this overstrict assumption, we propose a new AT paradigm that introduces an additional dummy class for each original class, aiming to accommodate hard adversarial samples whose distribution has shifted after perturbation. Robustness w.r.t. these adversarial samples is achieved by runtime recovery from the predicted dummy classes to their corresponding original ones, eliminating the compromise with clean learning. Building on this new paradigm, we propose a novel plug-and-play AT technique named DUmmy Classes-based Adversarial Training (DUCAT). Extensive experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that DUCAT concurrently improves clean accuracy and adversarial robustness compared with state-of-the-art benchmarks, effectively breaking the existing inherent trade-off.
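
A minimal sketch of the runtime-recovery step, assuming the classifier head outputs 2C logits with dummy class C + c paired to original class c (a layout assumed here for illustration, not necessarily DUCAT's exact one):

```python
import torch

NUM_CLASSES = 10  # e.g., CIFAR-10

def recover_predictions(logits, num_classes=NUM_CLASSES):
    """Runtime recovery for a dummy-class classifier: the head outputs
    2*num_classes logits -- original classes 0..C-1 plus one dummy per
    class at indices C..2C-1 (assumed layout). A dummy prediction is
    mapped back to its corresponding original class."""
    pred = logits.argmax(dim=1)
    return torch.where(pred >= num_classes, pred - num_classes, pred)

# Example: a batch of 4 predictions over 20 logits (10 original + 10 dummy).
logits = torch.randn(4, 2 * NUM_CLASSES)
labels = recover_predictions(logits)
```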

Subject: Machine Learning

Publish: 2024-10-16 15:36:10 UTC

#15 Explanation-Preserving Augmentation for Semi-Supervised Graph Representation Learning

Authors: Zhuomin Chen ; Jingchao Ni ; Hojat Allah Salehi ; Xu Zheng ; Esteban Schafir ; Farhad Shirani ; Dongsheng Luo

Graph representation learning (GRL), enhanced by graph augmentation methods, has emerged as an effective technique achieving performance improvements in a wide range of tasks such as node classification and graph classification. In self-supervised GRL, paired graph augmentations are generated from each graph. The objective is to infer similar representations for augmentations of the same graph, but maximally distinguishable representations for augmentations of different graphs. Analogous to the image and language domains, the desiderata of an ideal augmentation method include both (1) semantics preservation and (2) data perturbation; i.e., an augmented graph should preserve the semantics of its original graph while carrying sufficient variance. However, most existing (un-)/self-supervised GRL methods focus on data perturbation but largely neglect semantics preservation. To address this challenge, we propose a novel method, Explanation-Preserving Augmentation (EPA), that leverages graph explanation techniques to generate augmented graphs that can bridge the gap between semantics preservation and data perturbation. EPA first uses a small number of labels to train a graph explainer to infer the sub-structures (explanations) that are most relevant to a graph's semantics. These explanations are then used to generate semantics-preserving augmentations for self-supervised GRL, namely EPA-GRL. We demonstrate theoretically, using an analytical example, and through extensive experiments on a variety of benchmark datasets, that EPA-GRL outperforms state-of-the-art (SOTA) GRL methods, which are built upon semantics-agnostic data augmentations.

Subject: Machine Learning

Publish: 2024-10-16 15:18:03 UTC

#16 Position Specific Scoring Is All You Need? Revisiting Protein Sequence Classification Tasks

Authors: Sarwan Ali ; Taslim Murad ; Prakash Chourasia ; Haris Mansoor ; Imdad Ullah Khan ; Pin-Yu Chen ; Murray Patterson

Understanding the structural and functional characteristics of proteins is crucial for developing preventative and curative strategies that impact fields from drug discovery to policy development. An important and popular technique for examining how amino acids make up these characteristics of protein sequences is position-specific scoring (PSS). While the string kernel is crucial in natural language processing (NLP), it is unclear whether string kernels can extract biologically meaningful information from protein sequences, despite the fact that they have been shown to be effective in general sequence analysis tasks. In this work, we propose a weighted PSS kernel matrix (W-PSSKM) that combines a PSS representation of protein sequences, which encodes the frequency information of each amino acid in a sequence, with the notion of the string kernel. This results in a novel kernel function that outperforms many other approaches for protein sequence classification. We perform extensive experimentation to evaluate the proposed method. Our findings demonstrate that the W-PSSKM significantly outperforms existing baselines and state-of-the-art methods and achieves up to a 45.1% improvement in classification accuracy.
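
To make the idea tangible, here is an illustrative and deliberately simplified sketch of a position-specific, frequency-weighted representation and the resulting kernel value; the paper's actual W-PSSKM construction may differ:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AMINO_ACIDS)}

def pss_matrix(seq, max_len=50):
    """Toy position-specific scoring representation: a (max_len x 20)
    matrix whose entry at (position, amino acid) is the overall frequency
    of that amino acid in the sequence."""
    freq = np.zeros(20)
    for a in seq:
        freq[AA_INDEX[a]] += 1.0 / len(seq)
    M = np.zeros((max_len, 20))
    for pos, a in enumerate(seq[:max_len]):
        M[pos, AA_INDEX[a]] = freq[AA_INDEX[a]]
    return M

def w_psskm(seq1, seq2, max_len=50):
    """Hypothetical weighted PSS kernel value: Frobenius inner product of
    the two PSS matrices (a sketch, not the paper's exact kernel)."""
    return float(np.sum(pss_matrix(seq1, max_len) * pss_matrix(seq2, max_len)))

k = w_psskm("MKTAYIAKQR", "MKTAYIGKQR")
```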

Subject: Machine Learning

Publish: 2024-10-16 15:16:50 UTC

#17 Constrained Posterior Sampling: Time Series Generation with Hard Constraints

Authors: Sai Shankar Narasimhan ; Shubhankar Agarwal ; Litu Rout ; Sanjay Shakkottai ; Sandeep P. Chinchali

Generating realistic time series samples is crucial for stress-testing models and protecting user privacy by using synthetic data. In engineering and safety-critical applications, these samples must meet certain hard constraints that are domain-specific or naturally imposed by physics or nature. Consider, for example, generating electricity demand patterns with constraints on peak demand times. This can be used to stress-test the functioning of power grids during adverse weather conditions. Existing approaches for generating constrained time series are either not scalable or degrade sample quality. To address these challenges, we introduce Constrained Posterior Sampling (CPS), a diffusion-based sampling algorithm that aims to project the posterior mean estimate into the constraint set after each denoising update. Notably, CPS scales to a large number of constraints (~100) without requiring additional training. We provide theoretical justifications highlighting the impact of our projection step on sampling. Empirically, CPS outperforms state-of-the-art methods in sample quality and similarity to real time series by around 10% and 42%, respectively, on real-world stocks, traffic, and air quality datasets.
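
A minimal sketch of the sampling loop described above, with hypothetical stand-ins for the model components (posterior_mean, project, and noise_step are assumptions for illustration, not the paper's API):

```python
# Sketch of Constrained Posterior Sampling: after each denoising update,
# the posterior mean estimate of the clean sample is projected into the
# constraint set before being folded back into the reverse process.
def cps_sample(x_T, posterior_mean, project, noise_step, num_steps):
    x_t = x_T
    for t in reversed(range(1, num_steps + 1)):
        x0_hat = posterior_mean(x_t, t)   # posterior mean estimate of x_0
        x0_hat = project(x0_hat)          # push estimate into constraint set
        x_t = noise_step(x0_hat, x_t, t)  # standard reverse-diffusion update
    return x_t
```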

Subjects: Machine Learning ; Artificial Intelligence ; Signal Processing

Publish: 2024-10-16 15:16:04 UTC

#18 An Exact Finite-dimensional Explicit Feature Map for Kernel Functions

Authors: Kamaledin Ghiasi-Shirazi ; Mohammadreza Qaraei

Kernel methods in machine learning use a kernel function that takes two data points as input and returns their inner product after mapping them to a Hilbert space, implicitly and without actually computing the mapping. For many kernel functions, such as the Gaussian and Laplacian kernels, the feature space is known to be infinite-dimensional, making operations in this space possible only implicitly. This implicit nature necessitates algorithms to be expressed using dual representations and the kernel trick. In this paper, given an arbitrary kernel function, we introduce an explicit, finite-dimensional feature map that ensures the inner product of data points in the feature space equals the kernel function value, during both training and testing. The existence of this explicit mapping allows kernelized algorithms to be formulated in their primal form, without the need for the kernel trick or the dual representation. As a first application, we demonstrate how to derive kernelized machine learning algorithms directly, without resorting to the dual representation, and apply this method specifically to PCA. As another application, without any changes to the t-SNE algorithm and its implementation, we use it for visualizing the feature space of kernel functions.
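
One standard way to realize such an exact finite-dimensional map on a fixed set of points, shown here as a sketch under the assumption that the Gram matrix over all train and test points is available (the paper's exact construction may differ):

```python
import numpy as np

def explicit_feature_map(K):
    """Given the Gram matrix K over all points of interest (train + test),
    return an n x n feature matrix Phi with Phi @ Phi.T ~= K, so explicit
    inner products reproduce the kernel exactly on those points. A standard
    eigendecomposition construction, not necessarily the paper's recipe."""
    vals, vecs = np.linalg.eigh((K + K.T) / 2.0)
    vals = np.clip(vals, 0.0, None)       # guard tiny negative eigenvalues
    return vecs * np.sqrt(vals)

# Example: Gaussian kernel on 5 random points.
X = np.random.randn(5, 3)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)
Phi = explicit_feature_map(K)             # Phi @ Phi.T ~= K
```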

Subject: Machine Learning

Publish: 2024-10-16 14:55:11 UTC

#19 Towards Graph Foundation Models: The Perspective of Zero-shot Reasoning on Knowledge Graphs

Authors: Kai Wang ; Siqiang Luo

Inspired by the success of artificial general intelligence, there is a trend towards developing Graph Foundation Models that excel in generalization across various graph tasks and domains. However, current models often require extensive training or fine-tuning to capture structural and semantic insights on new graphs, which limits their versatility. In this work, we explore graph foundation models from the perspective of zero-shot reasoning on Knowledge Graphs (KGs). Our focus is on utilizing KGs as a unified topological structure to tackle diverse tasks, while addressing semantic isolation challenges in KG reasoning to effectively integrate diverse semantic and structural features. This brings us new methodological insights into KG reasoning, as well as high generalizability towards foundation models in practice. Methodologically, we introduce SCORE, a unified graph reasoning framework that effectively generalizes diverse graph tasks using zero-shot learning. At the core of SCORE is semantic conditional message passing, a technique designed to capture both structural and semantic invariances in graphs, with theoretical backing for its expressive power. Practically, we evaluate the zero-shot reasoning capability of SCORE using 38 diverse graph datasets, covering node-level, link-level, and graph-level tasks across multiple domains. Our experiments reveal a substantial performance improvement over prior foundation models and supervised baselines, highlighting the efficacy and adaptability of our approach.

Subjects: Machine Learning ; Artificial Intelligence

Publish: 2024-10-16 14:26:08 UTC

#20 Low-Rank Adversarial PGD Attack

Authors: Dayana Savostianova ; Emanuele Zangrando ; Francesco Tudisco

Adversarial attacks on deep neural network models have seen rapid development and are extensively used to study the stability of these networks. Among various adversarial strategies, Projected Gradient Descent (PGD) is a widely adopted method in computer vision due to its effectiveness and quick implementation, making it suitable for adversarial training. In this work, we observe that in many cases, the perturbations computed using PGD predominantly affect only a portion of the singular value spectrum of the original image, suggesting that these perturbations are approximately low-rank. Motivated by this observation, we propose a variation of PGD that efficiently computes a low-rank attack. We extensively validate our method on a range of standard models as well as robust models that have undergone adversarial training. Our analysis indicates that the proposed low-rank PGD can be effectively used in adversarial training due to its straightforward and fast implementation coupled with competitive performance. Notably, we find that low-rank PGD often performs comparably to, and sometimes even outperforms, the traditional full-rank PGD attack, while using significantly less memory.
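
As an illustration of the low-rank idea, the sketch below projects a perturbation onto a fixed rank channel-wise via truncated SVD; this is a stand-in for, not a reproduction of, the paper's algorithm.

```python
import torch

def low_rank_project(delta, rank):
    """Illustrative low-rank step: project a (C, H, W) perturbation onto
    rank-`rank` matrices channel-wise via truncated SVD. A PGD variant
    could apply this after each gradient-ascent step."""
    out = torch.empty_like(delta)
    for c in range(delta.shape[0]):
        U, S, Vh = torch.linalg.svd(delta[c], full_matrices=False)
        out[c] = (U[:, :rank] * S[:rank]) @ Vh[:rank]
    return out

# Example: rank-4 projection of a random 3x32x32 perturbation.
delta = torch.randn(3, 32, 32)
delta_lr = low_rank_project(delta, rank=4)
```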

Subjects: Machine Learning (cs.LG) ; Artificial Intelligence ; Numerical Analysis ; Machine Learning (stat.ML)

Publish: 2024-10-16 14:24:51 UTC

#21 Self-Supervised Learning of Disentangled Representations for Multivariate Time-Series

Authors: Ching Chang ; Chiao-Tung Chan ; Wei-Yao Wang ; Wen-Chih Peng ; Tien-Fu Chen

Multivariate time-series data in fields like healthcare and industry are informative but challenging due to high dimensionality and lack of labels. Recent self-supervised learning methods excel in learning rich representations without labels but struggle with disentangled embeddings and inductive bias issues like transformation-invariance. To address these challenges, we introduce TimeDRL, a framework for multivariate time-series representation learning with dual-level disentangled embeddings. TimeDRL features: (i) disentangled timestamp-level and instance-level embeddings using a [CLS] token strategy; (ii) timestamp-predictive and instance-contrastive tasks for representation learning; and (iii) avoidance of augmentation methods to eliminate inductive biases. Experiments on forecasting and classification datasets show TimeDRL outperforms existing methods, with further validation in semi-supervised settings with limited labeled data.

Subjects: Machine Learning ; Artificial Intelligence

Publish: 2024-10-16 14:24:44 UTC

#22 The Bayesian Confidence (BACON) Estimator for Deep Neural Networks

Authors: Patrick D. Kee ; Max J. Brown ; Jonathan C. Rice ; Christian A. Howell

This paper introduces the Bayesian Confidence Estimator (BACON) for deep neural networks. The current practice of interpreting Softmax values in the output layer as probabilities of outcomes is prone to extreme predictions of class probability. In this work we extend Waagen's method of representing the terminal layers with a geometric model, where the probability associated with an output vector is estimated with Bayes' Rule, using validation data to provide likelihood and normalization values. This estimator provides superior ECE and ACE calibration error compared to Softmax for ResNet-18 at 85% network accuracy, and for EfficientNet-B0 at 95% network accuracy, on the CIFAR-10 dataset with an imbalanced test set, except for very high-accuracy edge cases. In addition, when using the ACE metric, BACON demonstrated improved calibration error when estimating probabilities for the imbalanced test set using actual class distribution fractions.
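
A schematic of the Bayes-rule idea, using validation-set histograms as a stand-in for Waagen's geometric model (the histogram likelihoods here are an assumption for illustration, not BACON's actual construction):

```python
import numpy as np

def bacon_like_posterior(score, val_scores_by_class, class_priors, bins=20):
    """Illustrative Bayes-rule confidence: class-conditional likelihoods of
    a scalar network score in [0, 1] are read off validation histograms and
    combined with class priors via Bayes' Rule."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = int(np.clip(np.digitize(score, edges) - 1, 0, bins - 1))
    lik = np.array([np.histogram(s, bins=edges, density=True)[0][idx]
                    for s in val_scores_by_class])
    post = lik * np.asarray(class_priors)
    return post / post.sum()
```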

Subject: Machine Learning

Publish: 2024-10-16 14:23:36 UTC

#23 Dynamic Learning Rate for Deep Reinforcement Learning: A Bandit Approach

Authors: Henrique Donâncio ; Antoine Barrier ; Leah F. South ; Florence Forbes

In Deep Reinforcement Learning models trained using gradient-based techniques, the choice of optimizer and its learning rate are crucial to achieving good performance: higher learning rates can prevent the model from learning effectively, while lower ones might slow convergence. Additionally, due to the non-stationarity of the objective function, the best-performing learning rate can change over the training steps. To adapt the learning rate, a standard technique consists of using decay schedulers. However, these schedulers assume that the model is progressively approaching convergence, which may not always be true, leading to delayed or premature adjustments. In this work, we propose dynamic Learning Rate for deep Reinforcement Learning (LRRL), a meta-learning approach that selects the learning rate based on the agent's performance during training. LRRL is based on a multi-armed bandit algorithm, where each arm represents a different learning rate, and the bandit feedback is provided by the cumulative returns of the RL policy to update the arms' probability distribution. Our empirical results demonstrate that LRRL can substantially improve the performance of deep RL algorithms.
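
A minimal EXP3-style sketch of a bandit over candidate learning rates, in the spirit of the approach described above (the paper's exact bandit algorithm and feedback normalization may differ):

```python
import numpy as np

class LearningRateBandit:
    """Each arm is a learning rate; arm probabilities are updated from the
    RL policy's normalized cumulative return (EXP3-style sketch)."""

    def __init__(self, rates, eta=0.1):
        self.rates = rates
        self.eta = eta
        self.weights = np.ones(len(rates))

    def select(self):
        self.probs = self.weights / self.weights.sum()
        self.arm = np.random.choice(len(self.rates), p=self.probs)
        return self.rates[self.arm]

    def update(self, normalized_return):
        # Importance-weighted reward estimate for the pulled arm only.
        est = normalized_return / self.probs[self.arm]
        self.weights[self.arm] *= np.exp(self.eta * est)

# Example: pick a rate, train for a while, feed back a return in [0, 1].
bandit = LearningRateBandit([1e-4, 3e-4, 1e-3])
lr = bandit.select()
bandit.update(normalized_return=0.6)
```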

Subject: Machine Learning

Publish: 2024-10-16 14:15:28 UTC

#24 Personalized Prediction Models for Changes in Knee Pain among Patients with Osteoarthritis Participating in Supervised Exercise and Education

Authors: M. Rafiei ; S. Das ; M. Bakhtiari ; E. M. Roos ; S. T. Skou ; D. T. Grønne ; J. Baumbach ; L. Baumbach

Knee osteoarthritis (OA) is a widespread chronic condition that impairs mobility and diminishes quality of life. Despite the proven benefits of exercise therapy and patient education in managing the OA symptoms of pain and functional limitations, these strategies are often underutilized. Personalized outcome prediction models can help motivate and engage patients, but the accuracy of existing models in predicting changes in knee pain remains insufficiently examined. We aim to validate existing models and introduce a concise personalized model predicting changes in knee pain from before to after participating in a supervised education and exercise therapy program (GLA:D) for knee OA patients. Our models use self-reported patient information and functional measures. To refine the number of variables, we evaluated variable importance and applied clinical reasoning. We trained random forest regression models and compared the rate of true predictions of our models with those utilizing average values. We evaluated the performance of a full, a continuous, and a concise model, including all 34 variables, all 11 continuous variables, and the six most predictive variables, respectively. All three models performed similarly and were comparable to the existing model, with R-squared values of 0.31-0.32 and RMSEs of 18.65-18.85, despite our increased sample size. Allowing a deviation of 15 VAS points from the true change in pain, our concise model and the average values estimated the change in pain correctly in 58% and 51% of cases, respectively. Our supplementary analysis led to similar outcomes. Our concise personalized prediction model more accurately predicts changes in knee pain following the GLA:D program compared to average pain improvement values. Neither the increase in sample size nor the inclusion of additional variables improved previous models. To improve predictions, new variables beyond those in GLA:D are required.
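
For orientation, a minimal sketch of how such a concise random-forest model and its true-prediction rate could be evaluated (the variable selection, 500-tree setting, and function names are assumptions for illustration, not the study's exact configuration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

def fit_and_evaluate(X_train, y_train, X_test, y_test, tolerance=15):
    """Fit a random forest on the six most predictive variables (assumed
    to be the columns of X) and report R-squared, RMSE, and the rate of
    predictions within `tolerance` VAS points of the true pain change."""
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    r2 = r2_score(y_test, pred)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    within = float(np.mean(np.abs(pred - y_test) <= tolerance))
    return model, r2, rmse, within
```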

Subject: Machine Learning

Publish: 2024-10-16 14:15:01 UTC

#25 Expand and Compress: Exploring Tuning Principles for Continual Spatio-Temporal Graph Forecasting

Authors: Wei Chen ; Yuxuan Liang

The widespread deployment of sensing devices leads to a surge in data for spatio-temporal forecasting applications such as traffic flow, air quality, and wind energy. Although spatio-temporal graph neural networks have achieved success in modeling various static spatio-temporal forecasting scenarios, real-world spatio-temporal data are typically received in a streaming manner, and the network continuously expands with the installation of new sensors. Thus, spatio-temporal forecasting in streaming scenarios faces dual challenges: the inefficiency of retraining models over newly arrived data and the detrimental effects of catastrophic forgetting over long-term history. To address these challenges, we propose a novel prompt tuning-based continuous forecasting method, following two fundamental tuning principles guided by empirical and theoretical analysis: expand and compress, which effectively resolve the aforementioned problems with lightweight tuning parameters. Specifically, we integrate the base spatio-temporal graph neural network with a continuous prompt pool, utilizing stored prompts (i.e., few learnable parameters) in memory, and jointly optimize them with the base spatio-temporal graph neural network. This method ensures that the model sequentially learns from the spatio-temporal data stream to accomplish tasks for corresponding periods. Extensive experimental results on multiple real-world datasets demonstrate the multi-faceted superiority of our method over the state-of-the-art baselines, including effectiveness, efficiency, universality, etc.

Subjects: Machine Learning ; Artificial Intelligence

Publish: 2024-10-16 14:12:11 UTC