Subdivision Depth Computation for Tensor Product n-Ary Volumetric Models

2011 ◽  
Vol 2011 ◽  
pp. 1-22 ◽  
Author(s):  
Ghulam Mustafa ◽  
Muhammad Sadiq Hashmi

We offer a computational formula of subdivision depth for tensor product n-ary (n ⩾ 2) volumetric models based on an error bound evaluation technique. This formula provides an error-control tool for subdivision schemes over regular hexahedral lattices in higher-dimensional spaces. Moreover, the error bounds of Mustafa et al. (2006) are special cases of our bounds.
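The depth computation described above can be illustrated with a minimal sketch. It assumes the error after k levels of subdivision is bounded geometrically by C·r^k for some contraction rate 0 < r < 1, which is the typical shape of error bounds of this kind; the names `subdivision_depth`, `C`, `r` are hypothetical, and the paper's actual formula for n-ary volumetric models is more detailed.

```python
import math

def subdivision_depth(C, r, eps):
    """Smallest depth k with the (assumed) error bound C * r**k below eps.

    Hypothetical illustration: the error after k levels of an n-ary
    subdivision scheme is taken to decay geometrically as C * r**k,
    with initial bound C and contraction rate 0 < r < 1.
    """
    if C <= eps:
        return 0  # already within tolerance, no subdivision needed
    # Solve C * r**k < eps for the smallest integer k.
    return math.ceil(math.log(eps / C) / math.log(r))
```

For example, with C = 1 and r = 0.5, a tolerance of 0.1 requires depth 4, since 0.5³ = 0.125 > 0.1 but 0.5⁴ = 0.0625 < 0.1.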

10.37236/6516 ◽  
2018 ◽  
Vol 25 (3) ◽  
Author(s):  
Megumi Asada ◽  
Ryan Chen ◽  
Florian Frick ◽  
Frederick Huang ◽  
Maxwell Polevy ◽  
...  

Reay's relaxed Tverberg conjecture and Conway's thrackle conjecture are open problems about the geometry of pairwise intersections. Reay asked for the minimum number of points in Euclidean $d$-space that guarantees any such point set admits a partition into $r$ parts, any $k$ of whose convex hulls intersect. Here we give new and improved lower bounds for this number, which Reay conjectured to be independent of $k$. We prove a colored version of Reay's conjecture for $k$ sufficiently large, but nevertheless $k$ independent of dimension $d$. Pairwise intersecting convex hulls have severely restricted combinatorics. This is a higher-dimensional analogue of Conway's thrackle conjecture or its linear special case. We thus study convex-geometric and higher-dimensional analogues of the thrackle conjecture alongside Reay's problem and conjecture (and prove in two special cases) that the number of convex sets in the plane is bounded by the total number of vertices they involve whenever there exists a transversal set for their pairwise intersections. We thus isolate a geometric property that leads to bounds as in the thrackle conjecture. We also establish tight bounds for the number of facets of higher-dimensional analogues of linear thrackles and conjecture their continuous generalizations.


1975 ◽  
Vol 78 (2) ◽  
pp. 301-307 ◽  
Author(s):  
Simon Wassermann

A deep result in the theory of W*-tensor products, the Commutation theorem, states that if M and N are W*-algebras faithfully represented as von Neumann algebras on the Hilbert spaces H and K, respectively, then the commutant in L(H ⊗ K) of the W*-tensor product of M and N coincides with the W*-tensor product of M′ and N′. Although special cases of this theorem were established successively by Misonou (2) and Sakai (3), the validity of the general result remained conjectural until the advent of the Tomita-Takesaki theory of Modular Hilbert algebras (6). As formulated, the Commutation theorem is a spatial result; that is, the W*-algebras in its statement are taken to act on specific Hilbert spaces. Not surprisingly, therefore, known proofs rely heavily on techniques of representation theory.
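In symbols, with M ⊆ L(H) and N ⊆ L(K) faithfully represented as above and $\bar{\otimes}$ denoting the W*-tensor product, the Commutation theorem reads:

```latex
\left( M \,\bar{\otimes}\, N \right)' \;=\; M' \,\bar{\otimes}\, N'
\quad \text{in } L(H \otimes K).
```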


2008 ◽  
Vol 9 (2) ◽  
pp. 195-200 ◽  
Author(s):  
Péter Baranyi ◽  
Zoltán Petres ◽  
Péter Korondi ◽  
Yeung Yam ◽  
Hideki Hashimoto

2020 ◽  
Vol 45 (3) ◽  
pp. 966-992
Author(s):  
Michael Jong Kim

Sequential Bayesian optimization constitutes an important and broad class of problems where model parameters are not known a priori but need to be learned over time using Bayesian updating. It is known that the solution to these problems can in principle be obtained by solving the Bayesian dynamic programming (BDP) equation. Although the BDP equation can be solved in certain special cases (for example, when posteriors have low-dimensional representations), solving this equation in general is computationally intractable and remains an open problem. A second unresolved issue with the BDP equation lies in its (rather generic) interpretation. Beyond the standard narrative of balancing immediate versus future costs—an interpretation common to all dynamic programs with or without learning—the BDP equation does not provide much insight into the underlying mechanism by which sequential Bayesian optimization trades off between learning (exploration) and optimization (exploitation), the distinguishing feature of this problem class. The goal of this paper is to develop good approximations (with error bounds) to the BDP equation that help address the issues of computation and interpretation. To this end, we show how the BDP equation can be represented as a tractable single-stage optimization problem that trades off between a myopic term and a “variance regularization” term that measures the total solution variability over the remaining planning horizon. Intuitively, the myopic term can be regarded as a pure exploitation objective that ignores the impact of future learning, whereas the variance regularization term captures a pure exploration objective that only puts value on solutions that resolve statistical uncertainty. We develop quantitative error bounds for this representation and prove that the error tends to zero like $o(n^{-1})$ almost surely in the number of stages $n$, which, as a corollary, establishes strong consistency of the approximate solution.
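The myopic-plus-regularization idea can be sketched generically. The sketch below is not the paper's operator: it scores each action by its posterior-mean cost (pure exploitation) minus a bonus for residual posterior spread (a stand-in for valuing actions that resolve statistical uncertainty); the names `approx_bdp_action`, `cost_samples`, and `horizon_weight` are hypothetical, and the paper's variance-regularization term is defined over the remaining planning horizon and differs in detail.

```python
import numpy as np

def approx_bdp_action(cost_samples, horizon_weight):
    """Pick an action from a myopic-term-plus-uncertainty-bonus criterion.

    Hypothetical sketch: cost_samples[i, a] is the cost of action a under
    the i-th posterior draw of the unknown model parameters.
    """
    myopic = cost_samples.mean(axis=0)   # posterior-expected cost (exploitation)
    spread = cost_samples.std(axis=0)    # residual parameter uncertainty
    # Optimistic score: unresolved uncertainty makes an action worth probing.
    score = myopic - horizon_weight * spread
    return int(np.argmin(score))
```

With `horizon_weight = 0` this reduces to the pure myopic (greedy) rule; larger weights shift the choice toward actions whose value is still statistically uncertain.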


2019 ◽  
Vol 181 (2) ◽  
pp. 473-507 ◽  
Author(s):  
E. Ruben van Beesten ◽  
Ward Romeijnders

Abstract In traditional two-stage mixed-integer recourse models, the expected value of the total costs is minimized. In order to address risk-averse attitudes of decision makers, we consider a weighted mean-risk objective instead. Conditional value-at-risk is used as our risk measure. Integrality conditions on decision variables make the model non-convex and hence hard to solve. To tackle this problem, we derive convex approximation models and corresponding error bounds that depend on the total variations of the density functions of the random right-hand side variables in the model. We show that the error bounds converge to zero if these total variations go to zero. In addition, for the special cases of totally unimodular and simple integer recourse models we derive sharper error bounds.
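Schematically, a weighted mean-risk objective of the kind described takes the following form (notation assumed for illustration, not taken from the paper):

```latex
\min_{x \in X} \; c^{\top}x
\;+\; (1-\lambda)\,\mathbb{E}\left[ Q(x,\omega) \right]
\;+\; \lambda \, \mathrm{CVaR}_{\beta}\left[ Q(x,\omega) \right],
```

where $Q(x,\omega)$ is the mixed-integer second-stage value function, $\beta$ is the CVaR confidence level, and $\lambda \in [0,1]$ weights risk aversion against expected cost.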


1989 ◽  
Vol 31 (1) ◽  
pp. 17-29 ◽  
Author(s):  
N. D. Gilbert ◽  
P. J. Higgins

The tensor product of two arbitrary groups acting on each other was introduced by R. Brown and J.-L. Loday in [5, 6]. It arose from consideration of the pushout of crossed squares in connection with applications of a van Kampen theorem for crossed squares. Special cases of the product had previously been studied by A. S.-T. Lue [10] and R. K. Dennis [7]. The tensor product of crossed complexes was introduced by R. Brown and the second author [3] in connection with the fundamental crossed complex π(X) of a filtered space X, which also satisfies a van Kampen theorem. This tensor product provides an algebraic description of the crossed complex π(X ⊗ Y) and gives a symmetric monoidal closed structure to the category of crossed complexes (over groupoids). Both constructions involve non-abelian bilinearity conditions which are versions of standard identities between group commutators. Since any group can be viewed as a crossed complex of rank 1, a close relationship might be expected between the two products. One purpose of this paper is to display the direct connections that exist between them and to clarify their differences.


2011 ◽  
Vol 10 (01) ◽  
pp. 129-155 ◽  
Author(s):  
ROBERT WISBAUER

Any (co)ring R is an endofunctor with (co)multiplication on the category of abelian groups. These notions were generalized to monads and comonads on arbitrary categories. Starting around 1970 with papers by Beck, Barr and others a rich theory of the interplay between such endofunctors was elaborated based on distributive laws between them and Applegate's lifting theorem of functors between categories to related (co)module categories. Curiously enough some of these results were not noticed by researchers in module theory and thus notions like entwining structures and smash products between algebras and coalgebras were introduced (in the nineties) without being aware that these are special cases of the more general theory. The purpose of this survey is to explain several of these notions and recent results from general category theory in the language of elementary module theory focusing on functors between module categories given by tensoring with a bimodule. This provides a simple and systematic approach to smash products, wreath products, corings and rings over corings (C-rings). We also highlight the relevance of the Yang–Baxter equation for the structures on the threefold tensor product of algebras or coalgebras (see 3.6).


2006 ◽  
Vol 83 (12) ◽  
pp. 879-903 ◽  
Author(s):  
Ghulam Mustafa ◽  
Sadiq Hashmi ◽  
Nusrat Anjum Noshi

Author(s):  
Jiaqin Chen ◽  
Vadim Shapiro ◽  
Krishnan Suresh ◽  
Igor Tsukanov

We propose a novel approach to shape optimization that combines and retains the advantages of the earlier optimization techniques. The shapes in the design space are represented implicitly as level sets of a higher-dimensional function that is constructed using B-splines (to allow free-form deformations), and parameterized primitives combined with R-functions (to support desired parametric changes). Our approach to shape design and optimization offers great flexibility because it provides explicit parametric control of geometry and topology within a large space of freeform shapes. The resulting method is also general in that it subsumes most other types of shape optimization as special cases. We describe an implementation of the proposed technique with attractive numerical properties. The effectiveness of the method is demonstrated by several numerical examples.
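The abstract's combination of parameterized primitives via R-functions can be sketched with the standard Rvachev R0 operations on implicit functions (convention: f > 0 inside a shape). This is a minimal illustration, not the authors' implementation; the `disk` primitive and all names are hypothetical.

```python
import numpy as np

def r_conjunction(f1, f2):
    """R0 intersection: positive exactly where both f1 and f2 are positive."""
    return f1 + f2 - np.sqrt(f1**2 + f2**2)

def r_disjunction(f1, f2):
    """R0 union: positive exactly where at least one of f1, f2 is positive."""
    return f1 + f2 + np.sqrt(f1**2 + f2**2)

def disk(x, y, cx, cy, r):
    """Hypothetical parameterized primitive: disk of radius r centered at (cx, cy)."""
    return r**2 - (x - cx)**2 - (y - cy)**2
```

For example, the implicit function of the intersection of two overlapping disks is `r_conjunction(disk(x, y, 0, 0, 1), disk(x, y, 0.5, 0, 1))`: positive at the origin (inside both), negative at (2, 0) (outside both). Composing such fields with B-spline perturbations is what gives the level-set representation both parametric and free-form degrees of freedom.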

