
    Valentina De Simone

    We analyze the convergence of an infeasible inexact potential reduction method for quadratic programming problems. We show that the convergence of this method is achieved if the residual of the KKT system satisfies a bound related to the duality gap. This result ...
    We introduce a linear semi-implicit complementary volume numerical scheme for solving level-set-like nonlinear diffusion equations arising in plane curve evolution driven by curvature and anisotropy. The scheme is L^∞ and W^{1,1} stable, and its efficiency comes from its linearity. Incomplete Cholesky preconditioners are used to solve the arising linear systems rapidly. Computational results related to anisotropic mean curvature motion in a plane are presented.
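    The key idea of a semi-implicit scheme is that the nonlinear diffusion coefficient is frozen at the previous time level, so each time step only requires the solution of a linear, tridiagonal-in-1D system. The sketch below illustrates this in a simplified 1D finite-difference setting (not the complementary volume discretization of the paper); the choice g(s) = 1/sqrt(1 + s^2) and all names are illustrative assumptions.

    ```python
    import math

    def thomas(sub, diag, sup, rhs):
        """Solve a tridiagonal linear system by the Thomas algorithm."""
        n = len(rhs)
        d = list(diag); r = list(rhs)
        for i in range(1, n):
            m = sub[i] / d[i - 1]
            d[i] -= m * sup[i - 1]
            r[i] -= m * r[i - 1]
        x = [0.0] * n
        x[-1] = r[-1] / d[-1]
        for i in range(n - 2, -1, -1):
            x[i] = (r[i] - sup[i] * x[i + 1]) / d[i]
        return x

    def semi_implicit_step(u, dt, dx):
        """One semi-implicit step for u_t = (g(u_x) u_x)_x with Neumann BCs."""
        n = len(u)
        # Diffusion coefficients at cell interfaces, frozen at the old time level:
        # this linearization is what makes each step a *linear* solve.
        g = [1.0 / math.sqrt(1.0 + ((u[i + 1] - u[i]) / dx) ** 2)
             for i in range(n - 1)]
        r = dt / dx ** 2
        sub = [0.0] * n; diag = [0.0] * n; sup = [0.0] * n
        for i in range(n):
            gl = g[i - 1] if i > 0 else 0.0   # left-interface coefficient
            gr = g[i] if i < n - 1 else 0.0   # right-interface coefficient
            sub[i] = -r * gl
            sup[i] = -r * gr
            diag[i] = 1.0 + r * (gl + gr)
        return thomas(sub, diag, sup, u)
    ```

    The assembled matrix is a symmetric M-matrix with unit row sums, so the step conserves mass and satisfies a discrete maximum principle, which is the discrete analogue of the stability properties stated above.
    
    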
    In this paper, we address the problem of long-term investment by exploring optimal strategies for allocating wealth among a finite number of assets over multiple periods. Building on the classical Markowitz mean-variance philosophy, we develop a new portfolio optimization framework that can produce sparse portfolios. The sparsity of the portfolio, both within each period and across periods, is induced by possibly nonconvex penalties. For the resulting nonconvex and nonsmooth constrained model, we propose a generalized alternating direction method of multipliers whose global convergence to a stationary point can be guaranteed theoretically. Moreover, numerical experiments conducted on several datasets generated from practical applications illustrate the effectiveness and advantages of the proposed model and solution method.
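    The splitting behind this kind of method can be sketched on a single-period toy problem, minimize 0.5 w'Sw - m'w + lam*||w||_1: introducing w = z puts the quadratic term on w (a linear solve) and the l1 term on z (closed-form soft-thresholding). This is a plain ADMM on illustrative 2-asset data, not the generalized ADMM or the full multi-period constrained model of the paper.

    ```python
    def soft(x, t):
        # Soft-thresholding: proximal operator of t*|.|
        return max(x - t, 0.0) - max(-x - t, 0.0)

    def solve2(A, b):
        # Direct solve of a 2x2 linear system by Cramer's rule.
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
                (A[0][0] * b[1] - A[1][0] * b[0]) / det]

    def admm_sparse_portfolio(S, m, lam, rho=1.0, iters=500):
        """ADMM for min 0.5 w'Sw - m'w + lam*||w||_1 (2 assets)."""
        z = [0.0, 0.0]; u = [0.0, 0.0]
        M = [[S[0][0] + rho, S[0][1]],
             [S[1][0], S[1][1] + rho]]
        for _ in range(iters):
            # w-update: quadratic subproblem (S + rho*I) w = m + rho*(z - u)
            rhs = [m[i] + rho * (z[i] - u[i]) for i in range(2)]
            w = solve2(M, rhs)
            # z-update: soft-thresholding enforces sparsity exactly
            z = [soft(w[i] + u[i], lam / rho) for i in range(2)]
            # scaled dual update
            u = [u[i] + w[i] - z[i] for i in range(2)]
        return z
    ```

    On S = [[0.04, 0.01], [0.01, 0.09]], m = [0.05, 0.01], lam = 0.02, the thresholding step zeroes out the second asset exactly, which is the sparsity mechanism the abstract refers to.
    
    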
    We investigate the use of the Bregman iteration method for the solution of the portfolio selection problem, both in the single-period and in the multi-period case. Our starting point is the classical Markowitz mean-variance model, properly extended to deal with the multi-period case. The constrained optimization problem at the core of the model is typically ill-conditioned, due to correlation between assets. We consider ℓ1-regularization techniques to stabilize the solution process, since these also have relevant financial interpretations.
    This paper focuses on the development of mathematical software for Total Variation (TV)-based deblurring on advanced architectures. TV methods are very effective for recovering blocky images from noisy and blurred data; however, the high complexity of the discrete problem entails a significant computational cost. The Time Marching (TM) and Fixed Point (FP) methods, with and without preconditioning, are considered and compared in terms of computational cost, running time, quality of reconstruction and parallel efficiency. The reported results suggest that although preconditioning improves the convergence of FP, its computational cost degrades both the execution time and the parallel efficiency. On the other hand, TM and FP without preconditioning are more suitable for parallel implementation and require less running time for the same reconstruction accuracy.
    Image segmentation is a central topic in image processing and computer vision and a key issue in many applications, e.g., in medical imaging, microscopy, document analysis and remote sensing. Consistent with human perception, image segmentation is the process of dividing an image into non-overlapping regions. These regions, which may correspond, e.g., to different objects, are fundamental for the correct interpretation and classification of the scene represented by the image. The division into regions is not unique; it depends on the application, i.e., it must be driven by the final goal of the segmentation and hence by the most significant features with respect to that goal. Thus, image segmentation can be regarded as a strongly ill-posed problem. A classical approach to dealing with ill-posedness consists in incorporating in the model a-priori information about the solution, e.g., in the form of penalty terms. In this work we provide a brief overview of basic computational model...
    Two-region image segmentation is the process of dividing an image into two regions of interest, i.e., the foreground and the background. To this aim, Chan et al. (SIAM J Appl Math 66(5):1632–1648, 2006) designed a model well suited for smooth images. One drawback of this model is that it may produce a bad segmentation when the image contains oscillatory components. Based on a cartoon-texture decomposition of the image to be segmented, we propose a new model that is able to produce an accurate segmentation also of images containing noise or oscillatory information such as texture. The novel model leads to a nonsmooth constrained optimization problem, which we solve by means of ADMM. The convergence of the numerical scheme is also proved. Several experiments on smooth, noisy, and textural images show the effectiveness of the proposed model.
    Image segmentation is a central topic in image processing and computer vision and a key issue in many applications, e.g., in medical imaging, microscopy, document analysis and remote sensing. Consistent with human perception, image segmentation is the process of dividing an image into non-overlapping regions. These regions, which may correspond to different objects, are fundamental for the correct interpretation and classification of the scene represented by the image. The division into regions is not unique; it depends on the application, i.e., it must be driven by the final goal of the segmentation and hence by the most significant features with respect to that goal. Image segmentation is an inherently ill-posed problem. A classical approach to dealing with ill-posedness consists in the use of regularization, which allows us to incorporate in the model a-priori information about the solution. In this work we provide a brief overview of regularized mathematical ...
    Preconditioned iterative methods provide an effective alternative to direct methods for the solution of the KKT linear systems arising in Interior Point algorithms, especially when large-scale problems are considered. We analyze the behaviour of a Constraint Preconditioner coupled with Krylov solvers in a Potential Reduction (PR) framework. We also present adaptive stopping criteria for the inner iterations that relate the accuracy in the solution of the KKT system to the quality of the current PR iterate, to increase the overall computational efficiency. Numerical experiments on a set of large-scale problems show the effectiveness of this approach. [DOI: 10.1685/CSC06031]
    We present a total-variation-regularized image segmentation model that uses local regularization parameters to take into account spatial image information. We propose some techniques for defining those parameters, based on the cartoon-texture decomposition of the given image, on the mean and median filters, and on a thresholding technique, with the aim of preventing excessive regularization in piecewise-constant or smooth regions and preserving spatial features in nonsmooth regions. Our model is obtained by modifying a well-known image segmentation model that was developed by T. Chan, S. Esedoḡlu, and M. Nikolova. We solve the modified model by an alternating minimization method using split Bregman iterations. Numerical experiments show the effectiveness of our approach.
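    One simple way to turn a filter into a local parameter map, in the spirit described above, is to measure local activity as the deviation of each pixel from a 3x3 median filter and shrink the regularization weight where activity is high, so flat regions are smoothed strongly while detailed regions are preserved. The specific formula lam0 / (1 + k * activity) and the function names are illustrative assumptions, not the rules defined in the paper.

    ```python
    def median3x3(img):
        """3x3 median filter on a 2D list image (border windows are clipped)."""
        h, w = len(img), len(img[0])
        out = [[0.0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                win = [img[a][b]
                       for a in range(max(0, i - 1), min(h, i + 2))
                       for b in range(max(0, j - 1), min(w, j + 2))]
                win.sort()
                out[i][j] = win[len(win) // 2]
        return out

    def local_tv_weights(img, lam0=1.0, k=10.0):
        """Per-pixel regularization weights: large on flat areas, small on detail."""
        med = median3x3(img)
        return [[lam0 / (1.0 + k * abs(img[i][j] - med[i][j]))
                 for j in range(len(img[0]))]
                for i in range(len(img))]
    ```

    On a constant image the map equals lam0 everywhere; at an isolated bright pixel the weight drops, which is exactly the "less regularization on nonsmooth features" behaviour the model aims for.
    
    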
    Updating preconditioners for the solution of sequences of large and sparse saddle-point linear systems via Krylov methods has received increasing attention in the last few years, because it reduces the cost of preconditioning while keeping the efficiency of the overall solution process. This paper provides a short survey of the two approaches proposed in the literature for this problem: updating the factors of a preconditioner available in a block LDL^T form, and updating a preconditioner via a limited-memory technique inspired by quasi-Newton methods.
    Much progress has been made in the last decade in building more and more powerful parallel computers, but the impact of such progress on the field of nonlinear optimization is still limited. One of the possible ways to exploit parallelism is to build efficient parallel linear algebra kernels to be used in nonlinear optimization codes. This paper presents some design and computational experiences in implementing the Cholesky factorization of a dense symmetric positive definite matrix on MIMD distributed-memory machines; this problem is of special interest since it represents the computational kernel of many optimization algorithms. Our overall conclusion is that such an approach can be worthwhile for medium- to large-size problems with a modest number of processors.
    Many data analysis problems can be modeled as constrained optimization problems characterized by nonsmooth functionals, often because of the presence of ℓ1-regularization terms. One of the most effective ways to solve such problems is the Alternating Direction Method of Multipliers (ADMM), which has been proved to have good theoretical convergence properties even if the arising subproblems are solved inexactly. Nevertheless, experience shows that the choice of the parameter τ penalizing the constraint violation in the Augmented Lagrangian underlying ADMM affects the method's performance. To this end, strategies for the adaptive selection of such a parameter have been analyzed in the literature and are still of great interest. In this paper, starting from an adaptive spectral strategy recently proposed in the literature, we investigate the use of different strategies based on Barzilai–Borwein-like stepsize rules. We test the effectiveness of the proposed strategies in the soluti...
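    To see why adapting τ matters, it helps to look at the best-known baseline: the classical residual-balancing rule, which grows τ when the primal residual dominates the dual residual and shrinks it in the opposite case. The sketch below shows that generic rule only; it is not the Barzilai–Borwein-like spectral strategies investigated in the paper, which instead estimate local curvature from successive iterates.

    ```python
    def update_tau(tau, primal_res, dual_res, mu=10.0, incr=2.0, decr=2.0):
        """Residual-balancing update for the ADMM penalty parameter tau."""
        if primal_res > mu * dual_res:
            return tau * incr   # constraint violation dominates: penalize more
        if dual_res > mu * primal_res:
            return tau / decr   # dual residual dominates: penalize less
        return tau              # residuals balanced: keep tau
    ```

    A rule like this is called once per outer ADMM iteration with the current residual norms; spectral strategies replace the fixed factors incr/decr with data-driven estimates.
    
    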
    In this work, we investigate the application of Deep Learning to portfolio selection in a Markowitz mean-variance framework. We refer to an ℓ1-regularized multi-period model; the choice of the ℓ1 norm aims at producing sparse solutions. A crucial issue is the choice of the regularization parameter, which must realize a trade-off between fidelity to data and regularization. We propose an algorithm based on neural networks for the automatic selection of the regularization parameter. Once the neural network training is completed, an estimate of the regularization parameter can be computed via forward propagation. Numerical experiments and comparisons performed on real data validate the approach.
    We consider the ℓ1-regularized Markowitz model, where an ℓ1-penalty term is added to the objective function of the classical mean-variance model to stabilize the solution process, promoting sparsity in the solution and avoiding short positions. In this paper, we consider the Bregman iteration method to solve the related constrained optimization problem. We propose an iterative algorithm based on a modified Bregman iteration, in which an adaptive updating rule for the regularization parameter is defined. Our main result shows that the modified scheme preserves the properties of the original one. Numerical tests are reported, which show the effectiveness of our approach.
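    The mechanism of Bregman iteration can be shown on the simplest instance, min lam*||w||_1 subject to w = b (i.e. the constraint matrix is the identity), where each subproblem min lam*||w||_1 + 0.5*||w - b_k||^2 has the closed-form soft-thresholding solution. "Adding back the residual" (b_{k+1} = b_k + (b - w_k)) drives the constraint violation to zero despite the penalty. The paper's adaptive updating rule for lam is not reproduced here; lam is kept fixed in this illustration.

    ```python
    def soft(x, t):
        # Soft-thresholding: proximal operator of t*|.|
        return max(x - t, 0.0) - max(-x - t, 0.0)

    def bregman_identity(b, lam, iters=50):
        """Bregman iteration for min lam*||w||_1 s.t. w = b (componentwise)."""
        bk = list(b)
        w = [0.0] * len(b)
        for _ in range(iters):
            w = [soft(x, lam) for x in bk]                       # inner subproblem
            bk = [bk[i] + (b[i] - w[i]) for i in range(len(b))]  # add back residual
        return w
    ```

    A single soft-thresholding step would return the biased estimate soft(b, lam); the added-back residuals remove that bias, so the iterates recover b itself while retaining exact zeros for components thresholded out along the way.
    
    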
    We present a new technique for building effective and low-cost preconditioners for sequences of shifted linear systems (A + αI)x_α = b, where A is symmetric positive definite and α > 0. This technique updates a preconditioner for A, available in the form of an LDL^T factorization, by modifying only the nonzero entries of the L factor in such a way that the resulting preconditioner mimics the diagonal of the shifted matrix and reproduces its overall behaviour. The proposed approach is supported by a theoretical analysis as well as by numerical experiments, showing that it works efficiently for a broad range of values of α.
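    To make the setting concrete, here is the naive seed-factorization update the paper improves upon: keep L fixed and shift only the diagonal factor, P_α = L (D + αI) L^T. The 2x2 example below shows its shortcoming, which motivates modifying L instead; the matrix and names are illustrative.

    ```python
    def ldl2x2(A):
        """LDL^T factorization of a 2x2 symmetric positive definite matrix."""
        d1 = A[0][0]
        l = A[1][0] / d1
        d2 = A[1][1] - l * l * d1
        return l, (d1, d2)

    def shifted_preconditioner(A, alpha):
        """Naive update: reuse L of A, shift D. Returns P = L (D + alpha*I) L^T."""
        l, (d1, d2) = ldl2x2(A)
        d1a, d2a = d1 + alpha, d2 + alpha
        # Assemble P explicitly for a 2x2 unit lower triangular L = [[1,0],[l,1]].
        return [[d1a, l * d1a],
                [l * d1a, l * l * d1a + d2a]]
    ```

    Since P = A + α L L^T, the mismatch P - (A + αI) = α(L L^T - I) grows with the magnitude of the off-diagonal entries of L; in particular the (2,2) entry of P overshoots the shifted diagonal by α l². Correcting the entries of L so that the preconditioner matches the diagonal of A + αI is the refinement the abstract describes.
    
    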
    preconditioning framework for sequences of diagonally