convex case
Recently Published Documents


TOTAL DOCUMENTS: 86 (FIVE YEARS: 8)
H-INDEX: 14 (FIVE YEARS: 0)

Author(s): Gabriela Kováčová, Birgit Rudloff

Abstract: In this paper we consider the problem, called convex projection, of projecting a convex set onto a subspace. We show that to a convex projection one can assign a particular multi-objective convex optimization problem such that a solution to that problem also solves the convex projection (and vice versa), which is analogous to the result in the polyhedral convex case considered in Löhne and Weißing (Math Methods Oper Res 84(2):411–426, 2016). In practice, however, one can only compute approximate solutions in the (bounded or self-bounded) convex case, which solve the problem up to a given error tolerance. We show that a similar connection can be proven for approximate solutions, but the tolerance level needs to be adjusted: an approximate solution of the convex projection solves the multi-objective problem only with an increased error, and, similarly, an approximate solution of the multi-objective problem solves the convex projection with an increased error. In both cases the tolerance is increased proportionally to a multiplier. These multipliers are deduced and shown to be sharp. These results allow one to compute approximate solutions to a convex projection problem by computing approximate solutions to the corresponding multi-objective convex optimization problem, for which algorithms exist in the bounded case. For completeness, we also investigate the potential generalization of the following result to the convex case. In Löhne and Weißing (2016), it was shown for the polyhedral case how to construct a polyhedral projection associated with any given vector linear program and how to relate their solutions; this in turn yields an equivalence between polyhedral projection, multi-objective linear programming and vector linear programming. We show that only some parts of this result can be generalized to the convex case, and we discuss the limitations.
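As a rough illustration of the projection problem itself (not of the paper's multi-objective reformulation), the following sketch approximates the projection of the unit ball in R³ onto a coordinate plane by solving one convex program per sampled direction. The set, the subspace, and all names are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

# Projection onto the x-y coordinate plane; P maps R^3 -> R^2.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

def support_point(d):
    """Maximize d . (P x) over the unit ball x . x <= 1 (a convex program)."""
    res = minimize(lambda x: -d @ (P @ x),
                   x0=np.zeros(3),
                   constraints=[{"type": "ineq",
                                 "fun": lambda x: 1.0 - x @ x}])
    return P @ res.x

# Support points of the projected set in a few sampled directions; for the
# unit ball the projection is the unit disk, so each support point should
# lie (approximately) on the unit circle.
dirs = [np.array([np.cos(t), np.sin(t)])
        for t in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)]
pts = [support_point(d) for d in dirs]
```

Collecting support points in many directions yields an outer approximation of the projected set, in the spirit of the approximate solutions discussed in the abstract.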


New Astronomy, 2021, pp. 101697
Author(s): Prachi Sachan, Md Sanam Suraj, Rajiv Aggarwal, Md Chand Asique, Amit Mittal

2021, Vol 4
Author(s): Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar

In distributed machine learning, where agents collaboratively learn from diverse private data sets, there is a fundamental tension between consensus and optimality. In this paper, we build on recent algorithmic progress in distributed deep learning to explore various consensus-optimality trade-offs over a fixed communication topology. First, we propose the incremental consensus-based distributed stochastic gradient descent (i-CDSGD) algorithm, which involves multiple consensus steps (where each agent communicates information with its neighbors) within each SGD iteration. Second, we propose the generalized consensus-based distributed SGD (g-CDSGD) algorithm, which enables us to navigate the full spectrum from complete consensus (all agents agree) to complete disagreement (each agent converges to its own model parameters). We analytically establish convergence of the proposed algorithms for strongly convex and nonconvex objective functions; we also analyze the momentum variants of the algorithms for the strongly convex case. We support our algorithms with numerical experiments and demonstrate significant improvements over existing methods for collaborative deep learning.
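The core i-CDSGD idea, multiple consensus (averaging) rounds interleaved with each local gradient step, can be sketched on a toy two-agent problem. The mixing matrix, objectives, and hyperparameters below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Two agents with private quadratic objectives f_i(x) = 0.5 * (x - c_i)^2;
# the minimizer of the average objective is mean(c) = 2.0.
c = np.array([1.0, 3.0])
W = np.array([[0.75, 0.25],       # doubly stochastic mixing matrix for a
              [0.25, 0.75]])      # two-agent communication topology
x = np.zeros(2)                   # one scalar parameter per agent
lr, tau = 0.1, 3                  # step size; tau consensus rounds per iteration

for _ in range(200):
    x = x - lr * (x - c)          # local gradient step on private data
    for _ in range(tau):          # multiple consensus steps per SGD
        x = W @ x                 # iteration (the "incremental" part)
```

Increasing tau pushes the agents toward consensus at the average optimum; taking tau = 0 decouples them entirely, the spectrum that g-CDSGD interpolates over.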


Author(s): Qiuchen Zhang, Jing Ma, Jian Lou, Li Xiong

We study differentially private (DP) stochastic nonconvex optimization, with a focus on its under-studied utility measures in terms of the expected excess empirical and population risks. While the excess risks are extensively studied for convex optimization, they are rarely studied for nonconvex optimization, especially the expected population risk. For the convex case, recent studies show that it is possible for private optimization to achieve the same order of excess population risk as nonprivate optimization under certain conditions. Whether such an ideal excess population risk is achievable remains an open question for the nonconvex case. In this paper, we progress towards an affirmative answer to this open problem: under certain conditions (i.e., well-conditioned nonconvexity), DP nonconvex optimization is indeed capable of achieving the same excess population risk as the nonprivate algorithm in most common parameter regimes. We achieve such improved utility rates compared to existing results by designing and analyzing the stagewise DP-SGD with early momentum algorithm. We obtain bounds on both the excess empirical risk and the excess population risk while guaranteeing differential privacy. Our analysis also yields the first known excess empirical and population risk bounds for DP-SGD with momentum. Experimental results on both shallow and deep neural networks, applied respectively to simple and complex real datasets, corroborate the theoretical results.
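The DP-SGD-with-momentum mechanics (per-example gradient clipping, Gaussian noise, a momentum buffer that the stagewise scheme would later anneal away) can be sketched on a scalar toy problem. All data, hyperparameters, and the objective below are illustrative assumptions; this is not the paper's algorithm verbatim, and no privacy accounting is performed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy private dataset and objective f(w) = mean_i (w - z_i)^2.
z = rng.normal(loc=2.0, scale=0.3, size=512)

w, v = 0.0, 0.0                  # parameter and momentum buffer
lr, beta = 0.01, 0.9             # step size and momentum; in a stagewise
                                 # scheme beta would be annealed to 0 later
C, sigma = 2.0, 0.5              # clipping norm and noise multiplier
B = 64                           # batch size

for step in range(1000):
    batch = rng.choice(z, size=B, replace=False)
    g = 2.0 * (w - batch)        # per-example gradients
    g = np.clip(g, -C, C)        # per-example clipping (|g| is the l2 norm
                                 # for a scalar parameter)
    noisy = g.mean() + rng.normal(0.0, sigma * C / B)   # Gaussian mechanism
    v = beta * v + noisy         # momentum step ("early momentum")
    w = w - lr * v
```

The noise scale sigma * C / B is what couples privacy to utility; the abstract's point is that, for well-conditioned nonconvex objectives, this cost need not degrade the excess population risk order.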


Author(s): Robert Denk, Michael Kupper, Max Nendel

Abstract: In this paper, we investigate convex semigroups on Banach lattices with order continuous norm, having $$L^p$$-spaces in mind as a typical application. We show that the basic results from linear $$C_0$$-semigroup theory extend to the convex case. We prove that the generator of a convex $$C_0$$-semigroup is closed and uniquely determines the semigroup whenever the domain is dense. Moreover, the domain of the generator is invariant under the semigroup, a result that leads to the well-posedness of the related Cauchy problem. In a last step, we provide conditions for the existence and strong continuity of semigroup envelopes for families of $$C_0$$-semigroups. The results are discussed in several examples such as semilinear heat equations and nonlinear integro-differential equations.
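For orientation, the generator and the Cauchy problem referred to above are defined exactly as in linear $$C_0$$-semigroup theory (these are the standard definitions, not quoted from the paper):

```latex
A f := \lim_{h \downarrow 0} \frac{S(h)f - f}{h},
\qquad
D(A) := \{\, f \in X : \text{the limit exists in } X \,\},
\qquad
u'(t) = A\,u(t), \quad u(0) = f_0 .
```

Well-posedness then means that $$u(t) = S(t) f_0$$ is the unique solution of this Cauchy problem; the paper's contribution is that closedness and uniqueness of $$A$$ survive when the maps $$S(t)$$ are merely convex rather than linear.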


2020, Vol 24, pp. 770-788
Author(s): Alejandro Cholaquidis, Antonio Cuevas

A set in the Euclidean plane is said to be biconvex if, for some angle θ ∈ [0, π∕2), all its sections along straight lines with inclination angles θ and θ + π∕2 are convex sets (i.e., empty sets or segments). Biconvexity is a natural notion with some useful applications in optimization theory. It has also been independently used, under the name of "rectilinear convexity", in computational geometry. We are concerned here with the problem of asymptotically reconstructing (or estimating) a biconvex set S from a random sample of points drawn on S. By analogy with the classical convex case, one would like to define the "biconvex hull" of the sample points as a natural estimator for S. However, as previously pointed out by several authors, the notion of "hull" for a given set A (understood as the "minimal" set including A and having the required property) has no obvious, useful translation to the biconvex case. This is in sharp contrast with the well-known elementary definition of convex hull. Thus, we have selected the most commonly accepted notion of "biconvex hull" (often called "rectilinear convex hull"): we first provide additional motivation for this definition, proving some useful relations with other convexity-related notions. Then, we prove some results concerning the consistent approximation of a biconvex set S and the corresponding biconvex hull. An analogous result is also provided for the boundaries. A method to approximate, from a sample of points on S, the biconvexity angle θ is also given.
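The section-wise definition of biconvexity is easy to operationalize for the axis-aligned case θ = 0 on a discretized set: every horizontal and vertical section must be a contiguous run. The grid representation and the function below are an illustrative sketch of the definition, not a method from the paper.

```python
import numpy as np

def is_biconvex_grid(mask):
    """Check biconvexity (theta = 0) of a set given as a boolean grid:
    every row and every column section must be a single contiguous run
    (an empty set, a point, or a segment)."""
    def sections_ok(m):
        for row in m:
            idx = np.flatnonzero(row)
            if idx.size and idx[-1] - idx[0] + 1 != idx.size:
                return False          # a gap in the section -> not a segment
        return True
    return sections_ok(mask) and sections_ok(mask.T)

# A plus-shaped set is biconvex for theta = 0 although it is not convex:
plus = np.array([[0, 1, 0],
                 [1, 1, 1],
                 [0, 1, 0]], dtype=bool)
# whereas any row or column with a hole, e.g. [[1, 0, 1]], fails the test.
```

The plus shape also illustrates why no minimal "biconvex hull" of a point set is available in general: several incomparable biconvex supersets can contain the same points, which is the difficulty the abstract refers to.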


2019, Vol 266 (1), pp. 179-201
Author(s): Laiyuan Gao, Yuntao Zhang
