Jonathan M. Borwein
Commemorative Conference
25–29 September 2017
Applied Analysis, Optimisation & Convex Functions
Theme chaired by Regina Burachik and Guoyin Li
Keynote talk: Local Elicitation of Maximal Monotonicity in Solving Generalized Equations and Problems of Optimization


An extension of this approach is in fact possible to an important setting in which $T$ is not maximal monotone but a partial inverse version of local maximal monotonicity can be elicited at a solution $(\bar x,\bar w)$ by a kind of augmentation. Local convergence is then guaranteed if the starting point is close enough to that solution. In optimization this is intimately related to the convexifying properties of augmented Lagrangian functions.
Uniform continuity of the product of real functions defined on a metric space
As is well known, the pointwise product of two bounded real-valued uniformly continuous functions $f$ and $g$ defined on a metric space is uniformly continuous, but boundedness is
hardly necessary: take $f(x) = g(x) = \sqrt{x}$ for $x \ge 0$, which are unbounded yet have a uniformly continuous product. In joint work with Som Naimpally that appeared in Real Analysis Exchange, we produced necessary and sufficient conditions
on a pair of uniformly continuous real-valued functions $(f,g)$ defined on an arbitrary metric space so that their pointwise product is uniformly continuous. Our conditions are
sufficient for any pair of real-valued functions, and are necessary for a class of pairs properly containing the uniformly continuous pairs. We precisely identify this class and
describe it in various ways.
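To see why the example works, note the elementary estimate (using only the functions named above):
$$|\sqrt{x}-\sqrt{y}| \;\le\; \sqrt{|x-y|} \qquad (x, y \ge 0),$$
so $x \mapsto \sqrt{x}$ is uniformly continuous on $[0,\infty)$ despite being unbounded, while the pointwise product $f(x)g(x) = x$ is Lipschitz and hence uniformly continuous.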
A different but equally interesting question is this: what are necessary and sufficient conditions on the internal structure of a metric space so that the product of each pair of uniformly continuous real-valued functions remains uniformly continuous?
In joint work with Maribel Garrido and Ana Meroño appearing in Set-Valued and Variational Analysis and dedicated to Jon Borwein, we show that this occurs exactly when the familiar bornology of Bourbaki bounded subsets coincides with a new larger bornology.
An approach for the convex feasibility problem via Monotropic Programming
We use recent zero duality gap results arising from the Monotropic Programming problem to analyze consistency of the convex feasibility problem in Hilbert spaces. We characterize
consistency in terms of the lower semicontinuity of the infimal convolution of the associated support functions.
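As a reminder of the objects involved (written here for two sets, with notation chosen only for this illustration): the support function of a set $C$ and the infimal convolution of two support functions are
$$\sigma_C(x^*) = \sup_{x \in C} \langle x^*, x \rangle, \qquad (\sigma_{C_1} \,\square\, \sigma_{C_2})(x^*) = \inf_{u} \left\{ \sigma_{C_1}(u) + \sigma_{C_2}(x^* - u) \right\},$$
and the characterization above asks when this infimal convolution is lower semicontinuous.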
Authors: Regina Burachik, Victoria Martín-Márquez
Optimal Control with Minimum Total Variation
We consider optimal control problems where the aim is to minimize the total variation in the control variables in addition to a general objective functional, resulting in a multi-objective optimal control problem.
We study an illustrative convex problem, which appears in a simple form, and obtain asymptotic results for the challenging case when only the total variation is to be minimized. We also discuss problems in a more general form.
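Schematically, and with symbols chosen here only for illustration, the problems considered have the multi-objective form
$$\min_{u} \; \bigl( J(u),\ \mathrm{TV}(u) \bigr), \qquad \mathrm{TV}(u) = \sup_{0 = t_0 < t_1 < \cdots < t_N = t_f} \; \sum_{i=0}^{N-1} \| u(t_{i+1}) - u(t_i) \|,$$
subject to the control system dynamics, where $J$ is a general objective functional and $\mathrm{TV}(u)$ is the total variation of the control $u$ over the time horizon $[0, t_f]$.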
Computing Radius of Robust Feasibility of Uncertain Linear Conic Programs via Semidefinite Programs
The radius of robust feasibility provides a numerical value for the largest possible uncertainty set that guarantees robust feasibility of an uncertain linear conic program. This determines when the robust feasible set is non-empty.
Otherwise the robust counterpart of an uncertain program is not well-defined as a robust optimization problem. In this talk, we address a key fundamental question of robust optimization: How to compute the radius of robust feasibility of uncertain linear conic programs, including linear programs? We first provide computable lower and upper bounds for the radius of robust feasibility for general uncertain linear conic programs under the commonly used ball uncertainty set. We then provide classes of linear conic programs where the bounds are calculated by finding the optimal values of related semidefinite linear programs (SDPs). The classes, in particular, include the important classes of uncertain SDPs and second-order cone programs. Then, we present an exact formula for the radius of robust feasibility for a class of uncertain linear conic programs. We show that in the case of an uncertain linear program the formula allows us to calculate the radius by finding the optimal value of an associated second-order cone program.
As an application, we show how the well-studied distance to ill-posedness of parametric linear conic systems can be calculated using the radius of robust feasibility. This is done by providing formulas for the lower and upper bounds and a new exact formula for the distance to ill-posedness.
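In schematic form, and with notation chosen here for illustration: if $F(\alpha)$ denotes the robust feasible set when each uncertain datum may range over a ball of radius $\alpha$ around its nominal value, then the radius of robust feasibility is
$$\rho = \sup \{ \alpha \ge 0 : F(\alpha) \neq \emptyset \},$$
so the robust counterpart is well defined as a robust optimization problem for uncertainty radii smaller than $\rho$.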
Meta-optimisation: Lower bounds for higher faces

On the Subdifferential and Recession Function of the Fitzpatrick Function

Optimal Backward Error for ODE

Joint work with Yalcin Kaya, Robert HC Moir, and Julia Jankowski.
Come to Order

Upon learning of the early 1980s results by Maurey [3] and Alspach [1] concerning the spaces $c_0$ and $L_1[0,1]$, Jon sensed underlying order-theoretic arguments were in play [2]. I will briefly describe my interactions with Jon and survey results for Banach lattices that his insight led to, as well as the new geometric conditions that these in turn inspired.
- Dale E. Alspach, A fixed point free nonexpansive map, Proc. Amer. Math. Soc., 82 (1981), 423-424.
- J. Borwein and B. Sims, Non-expansive mappings on Banach lattices and related topics, Houston J. of Math., 10 (1984), 339-355.
- B. Maurey, Points fixes des contractions sur un convexe fermé de $L_1$, Séminaire d'Analyse Fonctionnelle 1980-81, Exposé No. VIII, École Polytechnique, Palaiseau.
A Lyapunov-type approach to convergence of the Douglas--Rachford algorithm
The Douglas--Rachford projection algorithm is an iterative method used to find a point in the intersection of closed constraint sets. The algorithm has been observed experimentally
to solve various nonconvex feasibility problems which current theory cannot sufficiently explain. In this talk, we discuss convergence of the Douglas--Rachford algorithm in a potentially
nonconvex setting. Our analysis relies on the existence of a Lyapunov-type functional whose convexity properties need not coincide with convexity of the original constraint sets. Benoist
(2015) solved a conjecture of Borwein and Sims (2011) on the existence of such a functional for a specific nonconvex feasibility problem. Our work can be viewed as a generalization of
Benoist's approach. This is based on joint work with Matthew Tam (University of Göttingen).
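For concreteness, here is a minimal numerical sketch of the Douglas--Rachford iteration for one well-known nonconvex test case, a circle and a line in the plane (the setting of the Borwein--Sims example); the projections, parameters and starting point below are chosen purely for illustration and are not taken from the talk.

    import numpy as np

    def proj_circle(x):
        # Project onto the unit circle centred at the origin (nonconvex set).
        n = np.linalg.norm(x)
        return x / n if n > 0 else np.array([1.0, 0.0])

    def proj_line(x, beta=0.5):
        # Project onto the horizontal line y = beta (convex set).
        return np.array([x[0], beta])

    def douglas_rachford(x, iters=200):
        # One common form of the DR operator:
        #   T(x) = x + P_B(2 P_A(x) - x) - P_A(x).
        for _ in range(iters):
            pa = proj_circle(x)
            x = x + proj_line(2.0 * pa - x) - pa
        return proj_circle(x)  # "shadow" point, expected near an intersection point

    print(douglas_rachford(np.array([0.3, -0.7])))  # approx. (±sqrt(0.75), 0.5)

Experimentally the shadow sequence approaches an intersection point of the two sets, and it is exactly this kind of observed behavior that a Lyapunov-type analysis aims to explain.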
Detecting the Convexity of a Function and Related Issues

Recently Ahmadi, Olshevsky, Parrilo and Tsitsiklis (2013) have also answered a long-standing conjecture of Shor, which said that it is NP-hard to detect the convexity of a polynomial function of degree four and above. They showed the NP-hardness of detecting the convexity of polynomials by reducing a biquadratic optimization problem to it. We use an approach based on algebraic geometry to discuss this issue.
In the last section we collect some other notions of convexity, such as Schur convexity, geodesic convexity and matrix convexity. We consider this article/talk as a survey on the issue of convexity detection and the exciting challenges it throws at us.
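For contrast with the NP-hard quartic case, the quadratic case is easy: a quadratic form is convex exactly when its symmetric Hessian is positive semidefinite, which a small numerical check decides. The sketch below only illustrates this tractable case and is not part of the methods discussed in the talk.

    import numpy as np

    def quadratic_is_convex(Q, tol=1e-12):
        # x -> x^T Q x is convex iff the symmetric part of Q is positive semidefinite.
        S = (Q + Q.T) / 2.0
        return bool(np.all(np.linalg.eigvalsh(S) >= -tol))

    print(quadratic_is_convex(np.diag([1.0, 1.0])))   # x^2 + y^2: True
    print(quadratic_is_convex(np.diag([1.0, -1.0])))  # x^2 - y^2: False

For polynomials of degree four and above no such simple certificate is known, which is precisely what the NP-hardness result above makes rigorous.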
J. M. Borwein and A. S. Lewis, Convex Analysis and Nonlinear Optimization, 2nd edition, Springer, 2006.
A. A. Ahmadi, A. Olshevsky, P. A. Parrilo and J. N. Tsitsiklis, NP-hardness of deciding convexity of quartic polynomials and related problems, Math. Program. 137 (2013), no. 1-2, Ser. A, 453-476.
An Abstract Variational Theorem

Theorem. Let $f:X \to \mathbb{R}\cup\{+\infty\}$ be a proper function on a Banach space $(X, \|\cdot\|)$. If there exists a nonempty open subset $A$ of $\mathrm{Dom}(f^*)$ such that $\mathrm{argmax}(x^*-f) \not= \emptyset$ for each $x^* \in A$, then there exists a dense and $G_\delta$ subset $R'$ of $A$ such that $(x^*-f):X \to \mathbb{R} \cup\{-\infty\}$ has a strong maximum for each $x^* \in R'$. In addition, if $0 \in A$ and $\varepsilon >0$ then there exists an $x_0^* \in X^*$ with $\|x^*_0\| < \varepsilon$ such that $(x_0^*-f):X \to \mathbb{R} \cup\{-\infty\}$ has a strong maximum.
We will also present some applications of this theorem and compare it to Stegall's variational principle.
Numerical methods for nonmonotone contact problems in continuum mechanics
We present several robust and efficient numerical methods for non-convex, non-smooth variational problems in nonmonotone contact. Examples include nonmonotone friction and adhesive
contact problems, delamination and crack propagation in adhesive bonding of composite structures. A particularly challenging problem is adhesive bonding in the case of contamination. The non-smoothness
comes from the non-smooth data of the problems themselves, in particular from the nonmonotone, multivalued physical laws involved in the boundary conditions. The nonmonotone laws can be
expressed by means of the Clarke subdifferential of maximum, minimum or nested maximum-minimum functions. The weak formulation of the resulting boundary value problems leads to a
class of non-smooth variational inequalities, the so-called hemivariational inequalities (HVIs). The latter may be viewed as a first-order condition of a non-convex, non-smooth
minimization problem. These problems are much harder to analyze and solve than classical variational inequality problems such as Signorini contact or Tresca friction problems. The
resulting HVI problem is first regularized and then discretized by either finite element or boundary element methods. Another approach to solving non-smooth variational problems follows the
strategy: first discretize by finite elements, then optimize using finite-dimensional non-smooth optimization methods. Various numerical experiments illustrate the behavior, the
strength and the limitations of the proposed approximation schemes.
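As a schematic illustration, with notation chosen here rather than taken from the talk: a prototypical hemivariational inequality asks for $u \in V$ such that
$$a(u, v-u) + \int_{\Gamma_C} j^{\circ}(u;\, v-u)\, d\Gamma \;\ge\; \langle \ell, v-u \rangle \qquad \text{for all } v \in V,$$
where $a(\cdot,\cdot)$ is the bilinear form of the underlying elastic problem, $\ell$ collects the loads, and $j^{\circ}$ is the Clarke generalized directional derivative of a locally Lipschitz potential $j$ encoding the nonmonotone boundary law. When $j$ is a pointwise maximum of finitely many smooth functions $j_i$, its Clarke subdifferential is simply $\partial j(\xi) = \mathrm{co}\{\, j_i'(\xi) : j_i(\xi) = j(\xi) \,\}$, which is the sense in which nonmonotone laws are expressed above.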
On lexicographic tangents

I will introduce the notions of lexicographic differentiation and tangents, and focus on characterisations of the facial dual completeness property of convex cones.
The talk is based on joint work with Prof. Levent Tunçel, University of Waterloo.
Innovations in AskConstants version 2
The AskConstants program was inspired by the free online Inverse
Symbolic Calculator program developed by a team including Jon and
Peter Borwein, Simon Plouffe, David Bailey, Nathan Singer, Andrew
Shouldice, Lingyun Ye, Tomas Daske, Peter Dobcsanyi, Dante Manna,
and O-Yeat Chan. If you have not experienced this program, then visit
that website for a delightful surprise.
The reason for developing another such program was that I needed a function rather than a program, and I wanted it written in Mathematica so that I could easily use it for another Mathematica package. AskConstants version 1 was based on the integer null vector method described in Alan Meichsner's M.S. thesis and later adapted by Kevin Hare into the Maple identify function.
Since then I added the table lookup method that the Inverse Symbolic Calculator uses in addition to integer null vectors. But rather than using a few hundred binary searches in one very large sorted table, AskConstants uses bidirectional search using two smaller tables of approximately the same length $n$. The backward table is an uninstantiated set of univariate expressions together with one of their inverses with respect to one of the operators or functions therein. Given an input float, the inverse expressions and corresponding float values are instantiated, then sorted, followed by a collinear search. This permits assessment of $n^{2}$ expressions in time $\Theta(n\log n)$ and space $\Theta(n)$.
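To make the bidirectional idea concrete, here is a generic meet-in-the-middle sketch (not AskConstants' actual tables or code): suppose candidate identities have the form $x \approx a + b$, with $a$ drawn from one table and $b$ from another, each of length roughly $n$. Applying the inverse of $+$ to the input (forming $x - b$ for every $b$), sorting, and then doing binary lookups screens all $n^2$ combinations in $\Theta(n\log n)$ time and $\Theta(n)$ space.

    import math
    from bisect import bisect_left

    def meet_in_the_middle(x, forward, backward, tol=1e-9):
        # Screen all len(forward) * len(backward) sums a + b against x.
        # Instead of testing every pair, apply the inverse of '+' to x
        # (i.e. form x - b for each b), sort, and look up near matches.
        targets = sorted((x - b, name_b) for name_b, b in backward.items())
        values = [t[0] for t in targets]
        hits = []
        for name_a, a in forward.items():
            i = bisect_left(values, a - tol)
            while i < len(values) and values[i] <= a + tol:
                hits.append((name_a, targets[i][1]))
                i += 1
        return hits

    # Tiny illustrative tables; the real program uses far richer expression tables.
    forward = {"pi": math.pi, "e": math.e, "ln2": math.log(2)}
    backward = {"sqrt2": math.sqrt(2), "euler_gamma": 0.5772156649015329, "1/3": 1.0 / 3.0}

    print(meet_in_the_middle(math.pi + math.sqrt(2), forward, backward))  # [('pi', 'sqrt2')]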
Also, sorting the tables by their unsigned base-2 significands permits the program to omit all but the simplest of entries that differ in magnitude by a power of 2, then infer the appropriate sign and power of 2 to include in the result, thus making the coverage much larger than $n^{2}$ expressions.
The presentation will demonstrate AskConstants and describe its techniques. I plan to post at AskConstants.org shortly before the conference a freely downloadable version 2 including these performance improvements and many interface improvements. Meanwhile, please email me challenging non-float constants and floats of unknown identity to try!