A continuation of last week's talk.
We consider multi-objective convex optimal control problems. First we state a relationship between the (weakly or properly) efficient set of the multi-objective problem and the solution of the problem scalarized via a convex combination of objectives through a vector of parameters (or weights). Then we establish that (i) the solution of the scalarized (parametric) problem for any given parameter vector is unique and (weakly or properly) efficient and (ii) for each solution in the (weakly or properly) efficient set, there exists at least one corresponding parameter vector for the scalarized problem yielding the same solution. Therefore the set of all parametric solutions (obtained by solving the scalarized problem) is equal to the efficient set. Next we consider an additional objective over the efficient set. Based on the main result, the new objective can instead be considered over the (parametric) solution set of the scalarized problem. For the purpose of constructing numerical methods, we point to existing solution differentiability results for parametric optimal control problems. We propose numerical methods and give an example application to illustrate our approach. This is joint work with Henri Bonnel (University of New Caledonia).
This talk will introduce fractal transformations and some of their remarkable properties. I will explain the mathematics that sustains them and how to construct them in simple cases. In particular I hope to demonstrate a very recent result, showing how they can be applied to generate convenient mutually-singular measures that enable the storage of multiple images within a single image. The talk will include some beautiful computer graphics.
I will describe four recent theorems, developed jointly with Andrew Vince and David C. Wilson (both of the University of Florida), that reveal a surprisingly rich theory associated with an attractor of a projective iterated function system (IFS). The first theorem characterizes when a projective IFS has an attractor that avoids a hyperplane. The second theorem establishes that a projective IFS has at most one attractor. In the third theorem, the classical duality between points and hyperplanes in projective space leads to connections between attractors that avoid hyperplanes and repellers that avoid points, and an associated index, which is a nontrivial projective invariant, is defined. I will link these results to the Conley decomposition theorem.
Hyperbolic geometry will be introduced, and visualised via the Cinderella software suite. Simple constructions will be performed, and compared and contrasted with Euclidean geometry. Constructions and examples will be quite elementary. Audience participation, specifically suggestions for constructions to attempt, during the demonstration is actively encouraged. The speaker apologises in advance for not being nearly as knowledgeable of the subject as he probably ought to be.
Scenario Trees are compact representations of processes by which information becomes available. They are most commonly used when solving stochastic optimization problems with recourse, but they have many other uses. In this talk we discuss two uses of scenario trees: computing the value of hydroelectricity in a regulated market; and updating parameters of epidemiological models based on observations of syndromic data. In the former, we investigate the impact that the size and shape of the tree has on the dual price of the first stage demand constraint. In the latter, we summarize a simulation of epidemics and syndromic behaviors on a tree, then identify the subtree most likely to match observed data.
Mineral freight volume increases are driving transport infrastructure investments on Australia's east and west coasts. New and upgraded railways, roads and ports are planned or under construction to serve new mines, processing facilities and international markets. One of the fastest growing regions is Northern Queensland, central to which is the so-called Northern Economic Triangle that has Rockhampton, Mt Isa and Townsville at its vertices. CSIRO has been working with the Queensland Government to construct a new GIS-based infrastructure planning optimisation system known as the Infrastructure Futures Analysis Platform (IFAP). IFAP can be used to build long-term (e.g. 25 year) plans for infrastructure development in regions such as the Northern Economic Triangle. IFAP consists of a commercial Geographic Information System (MapInfo), a database and a network optimisation solver that has been constructed by CSIRO and will ultimately be open-sourced. The prototype IFAP is nearing completion and in this presentation I will discuss the development process and the underlying network optimisation problem.
Joint work with Kim Levy, Andreas Ernst, Gaurav Singh, Stuart Woodman, Andrew Higgins, Leorey Marquez, Olena Gavriliouk and Dhananjay Thiruvady
Nonconvex/nonsmooth phenomena appear naturally in many complex systems. In static systems and global optimization problems, the nonconvexity usually leads to multiple solutions in the related governing equations. Each of these solutions represents a certain possible state of the system. How to identify the global and local stability and extremality of these critical solutions is a challenging task in nonconvex analysis and global optimization. The classical Lagrangian-type methods and the modern Fenchel-Moreau-Rockafellar duality theories usually produce the well-known duality gap. It turns out that many nonconvex problems in global optimization and computational science are considered to be NP-hard. In nonlinear dynamics, the so-called chaotic behavior is mainly due to nonconvexity of the objective functions. In nonlinear variational analysis and partial differential equations, the existence of nonsmooth solutions has been considered an outstanding open problem.
In this talk, the speaker will present a potentially useful canonical duality theory for solving a class of optimization and control problems in complex systems. Starting from a very simple cubic nonlinear equation, the speaker will show that the optimal solutions for nonconvex systems are usually nonsmooth and cannot be captured by traditional local analysis and Newton-type methods. Based on the fundamental definitions of the objectivity and isotropy in continuum physics, the canonical duality theory is naturally developed, and can be used for solving a large class of nonconvex/nonsmooth/discrete problems in complex systems. The results illustrate the important fact that smooth analytic or numerical solutions of a nonlinear mixed boundary-value problem might not be minimizers of the associated variational problem. From a dual perspective, the convergence (or non-convergence) of the FDM is explained and numerical examples are provided. This talk should bring some new insights into nonconvex analysis, global optimization, and computational methods.
We shall continue from last week, performing more hyperbolic constructions, as well as some elliptic constructions for comparison. In particular, some alternating projection algorithms will be explored in hyperbolic space. With some luck we shall confound some more of our Euclidean intuitions. Audience participation is actively encouraged.
Machine learning problems are a particularly rich source of applications for sparse optimization, giving rise to a number of formulations that require specialized solvers and structured, approximate solutions. As case studies, we discuss two such applications - sparse SVM classification and sparse logistic regression - and present algorithms that are assembled from different components, including stochastic gradient methods, random approximate matrix factorizations, block coordinate descent, and projected Newton methods. We also describe a third (distantly related) application to selection of captive breeding populations for endangered species using binary quadratic programming, a project started during a visit to Newcastle in June 2009.
CARMA reflects changes in the mathematical research being undertaken at Newcastle. Mathematics is "the language of high technology" which underpins all facets of modern life, while current Information and Communication Technology (ICT) has become ubiquitous. No other research centre exists which focuses on the implications of developments in ICT, present and future, for the practice of research mathematics. CARMA fills this gap through the exploitation and development of techniques and tools for computer-assisted discovery and disciplined data-mining, including mathematical visualization. Advanced mathematical computation is equally essential to the solution of real-world problems; sophisticated mathematics forms the core of software used by decision-makers, engineers, scientists, managers and those who design, plan and control the products and systems which are key to present-day life.
This talk is aimed at being accessible to all.
Sebastian will be presenting Lagrangian Relaxation and the Cost Splitting Dual on Monday July 5 at 4pm. He will discuss when the Cost Splitting Dual can provide a better bound than the Lagrangian Relaxation or regular linear relaxation and apply the theory learned to an example.
In Euclidean space the medians of a triangle meet at a point that divides each median in the ratio 2 to 1. That point is called the centroid. Cinderella tells us that the medians of a triangle in hyperbolic space meet at a point, but the medians do not divide each other in any fixed ratio. What characterises that point? One answer is that it is the centre of mass of equal-mass particles placed at the vertices. I will outline how one can define the centre of mass of a set of particles (points) in a Riemannian manifold, and how one can understand this in terms of the exponential map. This centre of mass, or geometric mean, is sometimes called the Karcher mean (apparently first introduced by Cartan!). I will attempt to show what this tells us about the medians of a triangle.
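As a small illustration of the exponential-map description of the centre of mass, here is a minimal numerical sketch (not from the talk) of the Karcher mean in the hyperboloid model of the hyperbolic plane; the sample points and the fixed iteration count are arbitrary choices.

    import numpy as np

    def mink(u, v):
        # Minkowski bilinear form <u,v> = -u0*v0 + u1*v1 + u2*v2
        return -u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

    def exp_map(p, v):
        # exponential map at p applied to a tangent vector v (so <p,v> = 0)
        n = np.sqrt(max(mink(v, v), 0.0))
        return p if n < 1e-15 else np.cosh(n) * p + np.sinh(n) * v / n

    def log_map(p, q):
        # inverse of exp_map: the tangent vector at p of length d(p,q) pointing at q
        c = -mink(p, q)                      # equals cosh of the hyperbolic distance
        d = np.arccosh(max(c, 1.0))
        return np.zeros(3) if d < 1e-15 else d * (q - c * p) / np.sinh(d)

    def lift(x, y):
        # embed planar coordinates (x,y) on the hyperboloid x0^2 - x1^2 - x2^2 = 1
        return np.array([np.sqrt(1.0 + x * x + y * y), x, y])

    def karcher_mean(points, iters=50):
        # repeatedly average the log-images in the tangent space and exponentiate back
        p = points[0].copy()
        for _ in range(iters):
            v = sum(log_map(p, q) for q in points) / len(points)
            p = exp_map(p, v)
        return p

    A, B, C = lift(0.0, 0.0), lift(2.0, 0.0), lift(0.0, 3.0)   # a hyperbolic "triangle"
    print(karcher_mean([A, B, C]))                             # its centre of mass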
In this talk we will present some results recently obtained in collaboration with B. F. Svaiter on maximal monotone operators in nonreflexive Banach spaces. The focus will be on the use of the concept of a convex representation of a maximal monotone operator to obtain results on these operators of the following type: surjectivity of perturbations by duality mappings, uniqueness of the extension to the bidual, the Brøndsted-Rockafellar property, etc.
Several types of subdifferentials will be introduced on Riemannian manifolds. We'll show their properties and applications, including spectral functions, Borwein-Preiss variational principle, and distance functions.
We study the existence and approximation of fixed points of several types of Bregman nonexpansive operators in reflexive Banach spaces.
Please note the change in our usual day from Monday to Tuesday.
The Hu-Washizu formulation in elasticity is the mother of many different finite element methods in engineering computation. We present some modified Hu-Washizu formulations and their performance in removing the locking effect in nearly incompressible elasticity. Stabilisation of the standard Hu-Washizu formulation is used to obtain the stabilised nodal strain formulation or node-based uniform strain elements. However, we show that the standard or stabilised nodal strain formulation should be modified to obtain a uniformly convergent finite element approximation in the nearly incompressible case.
The classical prolate spheroidal wavefunctions (prolates) arise when solving the Helmholtz equation by separation of variables in prolate spheroidal coordinates. They interpolate between Legendre polynomials and Hermite functions. In a beautiful series of papers published in the Bell Labs Technical Journal in the 1960's, they were rediscovered by Landau, Slepian and Pollak in connection with the spectral concentration problem. After years spent out of the limelight while wavelets drew the focus of mathematicians, physicists and electrical engineers, the popularity of the prolates has recently surged through their appearance in certain communication technologies. In this talk we discuss the remarkable properties of these functions, the ``lucky accident'' which enables their efficient computation, and give details of their role in the localised sampling of bandlimited signals.
In vehicle routing problems (VRPs), a fleet of vehicles must be routed to service the demands of a set of customers in a least-cost fashion. VRPs have been studied extensively by operations researchers for over 50 years. Due to their complexity, VRPs generally cannot be solved optimally, except for very small instances, so researchers have turned to heuristic algorithms that can generate high-quality solutions in reasonable run times. Along these lines, we develop novel integer programming-based heuristics for several different VRPs. We apply our heuristics to benchmark problems in the literature and report computational results to demonstrate their effectiveness.
We discuss a class of sums which involve complex powers of the distance to points in a two-dimensional square lattice and trigonometric functions of their angle. We give a general expression which permits numerical evaluation of members of the class of sums to arbitrary order. We use this to illustrate numerically the properties of trajectories along which the real and imaginary parts of the sums are zero, and we show results for the first two of a particular set of angular sums (denoted C(1, 4m; s)) which indicate that their density of zeros on the critical line of the complex exponent is the same as that for the product (denoted C(0, 1; s)) of the Riemann zeta function and the Catalan beta function. We then introduce a function which is the quotient of the angular lattice sums C(1, 4m; s) with C(0, 1; s), and use its properties to prove that C(1, 4m; s) obeys the Riemann hypothesis for any m if and only if C(0, 1; s) obeys the Riemann hypothesis. We furthermore prove that if the Riemann hypothesis holds, then C(1, 4m; s) and C(0, 1; s) have the same distribution of zeros on the critical line (in a sense made precise in the proof).
I will describe the history and some recent research on a subject with a remarkable pedigree.
We discuss the Hahn-Banach-Lagrange theorem, a generalized form of the Hahn-Banach theorem. As applications, we derive various results on the existence of linear functionals in functional analysis, on the existence of Lagrange multipliers for convex optimization problems, with an explicit sharp lower bound on the norm of the solutions (multipliers), on finite families of convex functions (leading rapidly to a minimax theorem), on the existence of subgradients of convex functions, and on the Fenchel conjugate of a convex function. We give a complete proof of Rockafellar's version of the Fenchel duality theorem, and an explicit sharp lower bound for the norm of the solutions of the Fenchel duality theorem in terms of elementary geometric concepts.
Computation of definite integrals to high precision (typically several hundred digit precision) has emerged as a particularly fruitful tool for experimental mathematics. In many cases, integrals with no known analytic evaluations have been experimentally evaluated (pending subsequent formal proof) by applying techniques such as the PSLQ integer relation algorithm to the output numerical values. In other cases, intriguing linear relations have been found in a class of related integrals, relations which have subsequently been proven as instances of more general results. In this lecture, Bailey will introduce the two principal algorithms used for high-precision integration, namely Gaussian quadrature and tanh-sinh quadrature, with some details on efficient computer implementations. He will also present numerous examples of new mathematical results obtained, in part, by using these methods.
In a subsequent lecture, Bailey will discuss the PSLQ algorithm and give the details of efficient multi-level and parallel implementations.
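As a small taste of the tanh-sinh quadrature mentioned above, here is a minimal sketch using mpmath (not Bailey's own implementation); the sample integral, whose classical value is (pi/8) log 2, and the working precision are arbitrary choices.

    from mpmath import mp, quad, pi, log

    mp.dps = 50   # work with 50 decimal digits

    # tanh-sinh quadrature of int_0^1 log(1+x)/(1+x^2) dx
    val = quad(lambda x: log(1 + x) / (1 + x * x), [0, 1], method='tanh-sinh')

    print(val)               # numerical value to ~50 digits
    print(pi / 8 * log(2))   # the classical closed form, for comparison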
Historically, fossil fuels have been vital for our global energy needs. However, climate change is prompting renewed scrutiny of the role fossil fuel production will play in meeting those needs. In order to appropriately plan for our future energy needs, a new detailed model of fossil fuel supply is required. The modelling applied an algorithm-based approach to predict both supply and demand for coal, gas, oil and total fossil fuel resources. Total fossil fuel demand was calculated globally, based on world population and per capita demand, while production was calculated on a country-by-country basis and summed to obtain global production. Notably, production over the lifetime of a fuel source was not assumed to be symmetrical about a peak value, as depicted by a Hubbert curve. Separate production models were developed for mining (coal and unconventional oil) and field (gas and conventional oil) operations, which reflected the basic differences in extraction and processing techniques. Both of these models included a number of parameters that were fitted to historical production data, including: (1) coal for New South Wales, Australia; (2) gas from the North Sea, UK; and (3) oil from the North Sea, UK, and individual state data from the USA.
In this talk we will focus our attention on certain regularization techniques related to two operations involving monotone operators: point-wise sums of maximal monotone operators and pre-compositions of such operators with linear continuous mappings. These techniques, whose underlying idea is to obtain a bigger operator as a result, lead to two concepts of generalized operations: extended and variational sums of maximal monotone operators and the corresponding extended and variational compositions of monotone mappings with linear continuous operators. We will review some of the basic results concerning these generalized concepts, and present some recent important advances.
Given a pair of Banach spaces X and Y such that one is the dual of the other, we study the relationships between generic Fréchet differentiability of convex continuous functions on Y (Asplund property), generic existence of linear perturbations for lower semicontinuous functions on X to have a strong minimum (Stegall variational principle), and dentability of bounded subsets of X (Radon-Nikodým property).
The PSLQ algorithm is an algorithm for finding integer relations in a set of real numbers. In particular, if (x1, ..., xn) is a vector of real numbers, then PSLQ finds integers (a1, ..., an), not all zero, such that a1*x1 + a2*x2 + ... + an*xn = 0, if such integers exist. In practice, PSLQ finds a sequence of matrices B_n such that if x is the original vector, then the reduced vector y = x * B_n tends to have smaller and smaller entries, until one entry is zero (or a very small number commensurate with precision), at which point an integer relation has been detected. PSLQ also produces a sequence of bounds on the size of any possible integer, which bounds grow until either precision is exhausted or a relation has been detected.
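A minimal sketch of experimenting with an existing PSLQ implementation (mpmath's pslq, not the multi-level parallel code discussed in the lecture); the example recovers the integer relation behind Machin's formula, and the precision setting is arbitrary.

    from mpmath import mp, mpf, pslq, pi, atan

    mp.dps = 60   # PSLQ needs generous working precision

    # Machin: pi = 16*atan(1/5) - 4*atan(1/239), so the vector below
    # admits the integer relation (1, -16, 4).
    x = [pi, atan(mpf(1) / 5), atan(mpf(1) / 239)]
    print(pslq(x))   # expect a small integer vector such as [1, -16, 4]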
The fundamental duality formula (see Zalinescu, "Convex Analysis in General Vector Spaces", Theorem 2.7.1) is extended to functions mapping into the power set of a topological linear space with a convex cone which is not necessarily pointed. Pairs of linear functionals are used as dual variables instead of linear operators. The talk will consist of three parts. First, motivations and explanations are given for the infimum approach to set-valued optimization. It deviates from other approaches, and it seems to be the only way to obtain a theory which completely resembles the scalar case. In the second part, the main results are presented, namely the fundamental duality formula and several conclusions. The third part deals with duality formulas for set-valued risk measures, a cutting edge development in mathematical finance. It turns out that the proposed duality theory for set-valued functions provides a satisfying framework not only for set-valued risk measures, but also for no-arbitrage and superhedging theorems in conical market models.
An Asplund space is a Banach space which possesses desirable differentiability properties enjoyed by Euclidean spaces. Many characterisations of such spaces fall into two classes: (i) those where an equivalent norm possesses a particular general property, (ii) those where every equivalent norm possesses a particular property at some points of the space. For example: (i) X is an Asplund space if there exists an equivalent norm Fréchet differentiable on the unit sphere of the space; (ii) X is an Asplund space if every equivalent norm is Fréchet differentiable at some point of its unit sphere. In 1993 (F-P) showed that (i) X is an Asplund space if there exists an equivalent norm strongly subdifferentiable on the unit sphere of the space, and in 1995 (G-M-Z) showed that (ii) a separable X is an Asplund space if every equivalent norm is strongly subdifferentiable at a nonzero point of X. Problem: Is this last result true for non-separable spaces? In 1994 (C-P) showed (i) X is an Asplund space if there exists an equivalent norm with subdifferential mapping Hausdorff weak upper semicontinuous on its unit sphere. We show: (ii) X is an Asplund space if every continuous gauge on X has a point where its subdifferential mapping is Hausdorff weak upper semicontinuous with weakly compact image, which goes some way towards solving the problem.
This is an expository talk about (conjectural) hypergeometric evaluations of the lattice sums
$F(a,b,c,d)=(a+b+c+d)^2\sum_{n_j=-\infty,\ j=1,2,3,4}^\infty \frac{(-1)^{n_1+n_2+n_3+n_4}}{(a(6n_1+1)^2+b(6n_2+1)^2+c(6n_3+1)^2+d(6n_4+1)^2)^2}$
which arise as the values of L-functions of certain elliptic curves.
Riemannian manifolds constitute a broad and fruitful framework for the development of different fields in mathematics, such as convex analysis, dynamical systems, optimization and mathematical programming, among other scientific areas, where many approaches and methods have been successfully extended from Euclidean spaces. Nonpositive sectional curvature is an important property enjoyed by a large class of differential manifolds, so Hadamard manifolds, which are complete simply connected Riemannian manifolds of nonpositive sectional curvature, have provided a suitable setting for diverse disciplines.
On the other hand, the study of the class of nonexpansive mappings has become an active research area in nonlinear analysis. This is due to the connection with the geometry of Banach spaces along with the relevance of these mappings in the theory of monotone and accretive operators.
We study the problems that arise in the interface between the fixed point theory for nonexpansive type mappings and the theory of monotone operators in the setting of Hadamard manifolds. Different classes of monotone and accretive set-valued vector fields and the relationship between them will be presented, followed by the study of the existence and approximation of singularities for such vector fields. Then we analyze the problem of finding fixed points of nonexpansive type mappings and the connection with monotonicity. As a consequence, variational inequality and minimization problems in this setting will be discussed.
The term "closed form" is one of those mathematical notions that is commonplace, yet virtually devoid of rigor. And, there is disagreement even on the intuitive side; for example, most everyone would say that π + log 2 is a closed form, but some of us would think that the Euler constant γ is not closed. Like others before us, we shall try to supply some missing rigor to the notion of closed forms and also to give examples from modern research where the question of closure looms both important and elusive.
This talk accompanies a paper by Jonathan M. Borwein and Richard E. Crandall, to appear in the Notices of the AMS, which is available at http://www.carma.newcastle.edu.au/~jb616/closed-form.pdf
The term "closed form" is one of those mathematical notions that is commonplace, yet virtually devoid of rigor. And, there is disagreement even on the intuitive side; for example, most everyone would say that π + log 2 is a closed form, but some of us would think that the Euler constant γ is not closed. Like others before us, we shall try to supply some missing rigor to the notion of closed forms and also to give examples from modern research where the question of closure looms both important and elusive.
This talk accompanies a paper by Jonathan M. Borwein and Richard E. Crandall, to appear in the Notices of the AMS, which is available at http://www.carma.newcastle.edu.au/~jb616/closed-form.pdf.
The design of signals with specified frequencies has applications in numerous fields including acoustics, antenna beamforming, digital filters, optics, radar, and time series analysis. It is often desirable to concentrate signal intensity in certain locations, and design methods for this have been intensively studied and are well understood. However, these methods assume that the specified frequencies consist of an interval of integers. What happens when this assumption fails is almost a complete mystery, one that this talk will attempt to address.
In the area of metric fixed point theory, one of the outstanding questions was whether the fixed point property implies reflexivity. This question was answered in the negative in 2008 by P. K. Lin, when he showed that a certain renorming of the space of absolutely summable sequences has the fixed point property.
In this talk we will show a general way to renorm certain spaces so that they have the fixed point property. We also give general conditions for a given Banach space to enjoy the f.p.p., and we will show equivalences between geometrical properties and certain fixed point properties.
There are many algorithms with which linear programs (LPs) can be solved (Fourier-Motzkin, simplex, barrier, ellipsoid, subgradient, bundle, ...). I will provide a very brief review of these methods and their advantages and disadvantages. An LP solver is the main ingredient of every solution method (branch & bound, cutting planes, column generation, ...) for (NP-hard) mixed-integer linear programs (MIPs). What combinations of which techniques work well in practice? There is no general answer. I will show, by means of many practical examples from my research group (telecommunication, transport, traffic and logistics, energy, ...), how large scale LPs and MIPs are successfully attacked today.
The log-concavity of a sequence is a much studied concept in combinatorics with surprising links to many other mathematical fields. In this talk we discuss the stronger but much less studied notion of m-fold log-concavity, which has recently received some attention after Boros and Moll conjectured that a "remarkable" sequence encountered in the integration of an inverse cubic is infinitely log-concave. In particular, we report on a recent result of Branden which implies infinite log-concavity of the binomial coefficients and other new developments. Examples and conjectures are promised. A PDF of the talk is available here.
Accurate computer recognition of handwritten mathematics offers to provide a natural interface for mathematical computing, document creation and collaboration. Mathematical handwriting, however, provides a number of challenges beyond what is required for the recognition of handwritten natural languages. For example, it is usual to use symbols from a range of different alphabets and there are many similar-looking symbols. Many writers are unfamiliar with the symbols they must use and therefore write them incorrectly. Mathematical notation is two-dimensional, and size and placement information is important. Additionally, there is no fixed vocabulary of mathematical "words" that can be used to disambiguate symbol sequences. On the other hand there are some simplifications. For example, symbols do tend to be well-segmented. With these characteristics, new methods of character recognition are important for accurate handwritten mathematics input.
An informal one-day workshop on Multi Zeta Values will be held on Wed 20th Oct, from 12.30 pm to 6:00 pm. There will be talks by Laureate Professor Jonathan Borwein (Newcastle), Professor Yasuo Ohno (Kinki University, Osaka), and A/Professor Wadim Zudilin (Newcastle), as well as by PhD students from the two universities, followed by a dinner. If you are interested in attending, please inform Juliane Turner Juliane.Turner@newcastle.edu.au so that we can plan for the event.
The matching polynomial is a topic at the intersection of mathematics, statistical physics (the dimer-monomer problem) and chemistry (topological resonance energy). In this talk we will discuss the computation of the matching polynomial and the location of its roots. We show that the roots of matching generating polynomials of graphs are dense in (−∞, 0] and the roots of matching polynomials of graphs are dense in (−∞, +∞), which answers a problem of Brown et al. (see Journal of Algebraic Combinatorics, 19, 273–282, 2004). Some similar results for the characteristic polynomial, independence polynomial and chromatic polynomial are also presented.
[Also speaking: Prof Weigen Yan]
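As a quick illustration of the objects in the preceding abstract, here is a brute-force sketch (adequate only for very small graphs) that enumerates matchings to build the matching generating polynomial and inspects its roots; by the Heilmann-Lieb theorem these roots are real and non-positive.

    import itertools
    import numpy as np

    def matching_generating_poly(edges):
        # coefficients [m_0, m_1, ...] of M(G,x) = sum_k m_k x^k,
        # where m_k counts the matchings of G with exactly k edges
        counts = {0: 1}
        for r in range(1, len(edges) + 1):
            for subset in itertools.combinations(edges, r):
                verts = [v for e in subset for v in e]
                if len(verts) == len(set(verts)):   # edges pairwise disjoint
                    counts[r] = counts.get(r, 0) + 1
        return [counts.get(k, 0) for k in range(max(counts) + 1)]

    # the 4-cycle C4: one empty matching, 4 single edges, 2 perfect matchings
    c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
    coeffs = matching_generating_poly(c4)
    print(coeffs)                  # [1, 4, 2]
    print(np.roots(coeffs[::-1]))  # real, non-positive roots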
We discuss the feasibility pump heuristic and we interpret it as a multi-start, global optimization algorithm that utilizes a fast local minimizer. The function that is minimized has many local minima, some of which correspond to feasible integral solutions. This interpretation suggests alternative ways of incorporating restarts, one of which is the use of cutting planes to eliminate local optima that do not correspond to feasible integral solutions. Numerical experiments show encouraging results on standard test libraries.
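The following toy sketch shows the basic rounding/projection loop behind the feasibility pump (not the restart or cutting-plane variants discussed above), applied to a random 0-1 feasibility instance; the data, seed and iteration limits are arbitrary.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, m = 12, 8
    A = rng.integers(-3, 4, size=(m, n)).astype(float)
    x_hidden = (rng.random(n) > 0.5).astype(float)
    b = A @ x_hidden + 1.0               # guarantees some 0-1 point satisfies Ax <= b

    def lp(c):
        # LP step: minimise c^T x over Ax <= b, 0 <= x <= 1
        return linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * n, method='highs').x

    x = lp(np.zeros(n))                  # start from any LP-feasible point
    for it in range(100):
        x_round = np.round(x)
        if np.all(A @ x_round <= b + 1e-9):
            print("feasible 0-1 point found:", x_round.astype(int))
            break
        # projection step: minimise the L1 distance to the rounded point;
        # for binary targets |x_i - t_i| is linear in x_i, so this is an LP
        c = np.where(x_round > 0.5, -1.0, 1.0)
        x_new = lp(c)
        if np.allclose(np.round(x_new), x_round):   # cycling: perturb the rounding
            flip = rng.choice(n, size=3, replace=False)
            x_round[flip] = 1 - x_round[flip]
            x = x_round
        else:
            x = x_new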
We are looking at families of finite sets, more specifically subsets of [n]={1,2,...,n}. In particular, we are interested in antichains, meaning that no member of the family is contained in another one. In this talk we focus on antichains containing only sets of two different cardinalities, say k and l, and study the question of the smallest size of a maximal antichain (maximal in the sense that it is impossible to add any k-set or l-set without violating the antichain property). This can be nicely reformulated as a problem in extremal (hyper)graph theory, looking similar to the Turán problem on the maximum number of edges in a graph without a complete subgraph on l vertices. We sketch the solution for the case (k,l)=(2,3), conjecture an optimal construction for the case (k,l)=(2,4) and present some asymptotic bounds for this case.
The continued fraction:
$${\cal R}_\eta(a,b) =\,\frac{{\bf \it a}}{\displaystyle \eta+\frac{\bf \it b^2}{\displaystyle \eta +\frac{4{\bf \it a}^2}{\displaystyle \eta+\frac{9 {\bf \it b}^2}{\displaystyle \eta+{}_{\ddots}}}}}$$
enjoys attractive algebraic properties such as a striking arithmetic-geometric mean relation and elegant links with elliptic-function theory. The fraction presents a computational challenge, which we could not resist.
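One naive way to experience that challenge is to evaluate truncations of the fraction by backward recurrence, reading the general partial numerator off the display above (level n contributes n^2 b^2 for n odd and n^2 a^2 for n even); at a = b = eta = 1 the truncations creep slowly towards log 2, an evaluation recalled from the literature and used here only as a sanity check. The depths below are arbitrary.

    import math

    def R(eta, a, b, depth):
        # truncate the continued fraction at the given depth and fold it up backwards
        tail = eta
        for n in range(depth, 0, -1):
            num = n * n * (b * b if n % 2 == 1 else a * a)
            tail = eta + num / tail
        return a / tail

    for depth in (10, 100, 1000, 10000, 100000):
        print(depth, R(1.0, 1.0, 1.0, depth))
    print("log(2) =", math.log(2.0))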
The continued fraction:
$${\cal R}_\eta(a,b) =\,\frac{{\bf \it a}}{\displaystyle \eta+\frac{\bf \it b^2}{\displaystyle \eta +\frac{4{\bf \it a}^2}{\displaystyle \eta+\frac{9 {\bf \it b}^2}{\displaystyle \eta+{}_{\ddots}}}}}$$
enjoys attractive algebraic properties such as a striking arithmetic-geometric mean relation and elegant links with elliptic-function theory. The fraction presents a computational challenge, which we could not resist.
In Part II I will reprise what I need from Part I and focus on the dynamics. The talks are stored here.
We will give a brief overview of the history of Ramanujan and give samplings of areas such as partitions, partition congruences, ranks, modular forms, and mock theta functions. For example: A partition of a positive number $n$ is a non-increasing sequence of positive integers whose sum is $n$. There are five partitions of the number four: 4, 3+1, 2+2, 2+1+1, 1+1+1+1. If we let $p(n)$ be the number of partitions of $n$, it turns out that $p(5n+4)\equiv 0 \pmod{5}$. How does one explain this? Once the basics and context have been introduced, we will discuss new results with respect to mock theta functions and show how they relate to old and recent results.
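A quick empirical check of the congruence quoted above, using the standard dynamic programme for partition counts (a minimal sketch; the cut-off 200 is arbitrary):

    def partition_counts(N):
        # p[n] = number of partitions of n, built up by largest allowed part
        p = [1] + [0] * N
        for part in range(1, N + 1):
            for n in range(part, N + 1):
                p[n] += p[n - part]
        return p

    p = partition_counts(200)
    print(p[4])                                             # 5, matching the list above
    print(all(p[5 * n + 4] % 5 == 0 for n in range(40)))    # p(5n+4) is divisible by 5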
What is a {\em random subgroup} of a group, and who cares? In a (non-abelian) group-based cryptosystem, two parties (Alice and Bob) each choose a subgroup of some platform group "at random" -- each picks $k$ elements "at random" and takes the subgroup generated by their chosen elements.
But for some platform groups (like the braid groups, which were chosen first, being so complicated and difficult) a "random subgroup" is not so random after all. It turns out that if you pick $k$ elements of a braid group, they will (almost always) generate a {\em free group} with your $k$ elements as a free basis. And if Alice and Bob are just playing with free groups, it makes their secrets easy to attack.
Richard Thompson's group $F$ is an infinite, torsion free group, with many weird and cool properties, but the one I liked for this project is that it has {\em no} free subgroups (of rank $>1$) at all, so a random subgroup of $F$ could not be free -- so what would it be?
This is joint work with Sean Cleary (CUNY), Andrew Rechnitzer (UBC) and Jeniffer Taback (Bowdoin).
In this talk we consider two-phase flow models in porous media as they occur in several applications such as oil production, pollutant transport and CO2 storage. After a general introduction, we focus on an enhanced model in which the capillary pressure is rate-dependent. We discuss the consequences of this term for heterogeneous materials with and without entry pressure. In the case of entry pressures the problem can be reformulated as an inequality constraint at the material interface. Suitable discretization schemes and solution algorithms are proposed and used in various numerical simulations.
The complete elliptic integrals of the first and second kinds (K(x) and E(x)) will be introduced and their key properties revised. Then, new and perhaps interesting results concerning moments and other integrals of K(x) and E(x) will be derived using elementary means. Diverse connections will be made, for instance with random walks and some experimental number theory.
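For a flavour of the 'moments of K and E' theme, here is a numerical check (a sketch only) of the classical first moment int_0^1 K(k) dk = 2G, with G Catalan's constant; note the assumption that mpmath's ellipk takes the parameter m = k^2.

    from mpmath import mp, quad, ellipk, catalan

    mp.dps = 30
    # K(k) = ellipk(k**2) under mpmath's parameter convention
    val = quad(lambda k: ellipk(k * k), [0, 1])
    print(val)            # the first moment of K
    print(2 * catalan)    # the classical evaluation 2G, for comparison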
See flyer [PDF]
Automorphism groups of locally finite trees form a significant class of examples of locally compact totally disconnected topological groups. In this talk I will discuss my honours research, which covered the various local properties of automorphism groups. I will provide methods of constructing such groups, in particular groups acting on regular trees, and discuss what conclusions we can make regarding the structure of these groups.
This talk will be an introduction to cycle decompositions of complete graphs in the context of Alspach's conjecture about the necessary and sufficient conditions for their existence. Several useful methods of construction based on algebra, graph products and modifying existing decompositions will be presented. The most up to date results on this problem will be mentioned and future directions of study may be suggested.
This talk will describe a number of different variational principles for self-adjoint eigenvalue problems that arose from considerations of convex and nonlinear analysis. First, some unconstrained variational principles that are smooth analogues of the classical Rayleigh principles for eigenvalues of symmetric matrices will be described. In particular the critical points are eigenvectors and their norms are related to the eigenvalues of the matrix. Moreover the functions have a nice Morse theory, with the Morse indices describing the ordering of the eigenvectors. Next, an unconstrained variational principle for eigenfunctions of elliptic operators will be illustrated for the classical Dirichlet Laplacian eigenproblem. The critical points of this problem have a Morse theory that plays a similar role to the classical Courant-Fischer-Weyl minimax theory. Finally, I will describe certain Steklov eigenproblems and indicate how they are used to develop a spectral characterization of trace spaces of Sobolev functions.
Rational numbers can be represented in many different ways: as a fraction, as a Möbius function, as a 2x2 matrix, as a string of L's and R's, as a continued fraction, as a disc in the plane, or as a point in the lattice Z^2. Converting between the representations involves interesting questions about computation and geometry. The geometries that arise are hyperbolic, inversive, or projective.
Various Vacation Scholars, HRD students and CARMA RAs will report on their work. This involves visualization and computation, practice and theory. Everyone is welcome to see what they have done and what they propose to do.
In this talk I will give some classical fixed point theorems and present some of their applications. The talk will be of a "Chalk and Talk" style and will include some elegant classical proofs. The down side of this is that the listener will be expected to have some familiarity with metric spaces, convexity and hopefully Zorn's Lemma.
Various Vacation Scholars, HRD students and CARMA RAs will report on their work. This involves visualization and computation, practice and theory. Everyone is welcome to see what they have done and what they propose to do.
We discuss a short-term revenue optimization problem that involves the optimal targeting of customers for a promotional sale in which a finite number of perishable items are offered on a last-minute offer. The goal is to select the subset of customers to whom the offer will be made available, maximizing the expected return. Each client replies with a certain probability and reports a specific value that depends on the customer type, so that the selected subset has to balance the risk of not selling all the items with the risk of assigning an item to a low value customer. Selecting all those clients with values above a certain optimal threshold may fail to achieve the maximal revenue. However, using a linear programming relaxation, we prove that such threshold strategies attain a constant factor of the optimal value. The achieved factor is ${1\over 2}$ when a single item is to be sold, and approaches 1 as the number of available items grows to infinity. Furthermore, for the single item case, we propose an upper bound based on an exponential size linear program that allows us to get a threshold strategy achieving at least ${2\over 3}$ of the optimal revenue. Computational experiments with random instances show a significantly better performance than the theoretical predictions.
Talk in [PDF]
We describe an integrated model for TCP/IP protocols with multipath routing. The model combines a Network Utility Maximization for rate control, with a Markovian Traffic Equilibrium for routing. This yields a cross-layer design in which sources minimize the expected cost of sending flows, while routers distribute the incoming flow among the outgoing links according to a discrete choice model. We prove the existence of a unique equilibrium state, which is characterized as the solution of an unconstrained strictly convex program of low dimension. A distributed algorithm for solving this optimization problem is proposed, with a brief discussion of how it can be implemented by adapting current Internet protocols.
Talk in [PDF]
Research, as an activity, is fundamentally collaborative in nature. Driven by the massive amounts of data that are produced by computational simulations and high resolution scientific sensors, data-driven collaboration is of particular importance in the computational sciences. In this talk, I will discuss our experiences in designing, deploying, and operating a Canada-wide advanced collaboration infrastructure in support of the computational sciences. In particular, I will focus on the importance of data in such collaborations and discuss how current collaboration tools are sorely lacking in their support of data-centric collaboration.
McCullough-Miller space X = X(W) is a topological model for the outer automorphism group of a free product of groups W. We will discuss the question of just how good a model it is. In particular, we consider circumstances under which Aut(X) is precisely Out(W).
The talk will explain joint work with Yehuda Shalom showing that the only homomorphisms from certain arithmetic groups to totally disconnected, locally compact groups are the obvious, or naturally occurring, ones. For these groups, this extends the superrigidity theorem that G. Margulis proved for homomorphisms from high rank arithmetic groups to Lie groups. The theorems will be illustrated by referring to the groups $SL_3(\mathbb{Z})$, $SL_2(\mathbb{Z}[\sqrt{2}])$ and $SL_3(\mathbb{Q})$.
CARMA is currently engaged in several shared projects with the IRMACS Centre and the OCANA group at UBC-O, both in British Columbia, Canada. This workshop will be an opportunity to learn about the IRMACS Centre and to experience the issues in collaborating for research and teaching across the Pacific.
This will be followed by discussion and illustrations of collaboration, technology, teaching and funding etc.
Cross Pacific Collaboration pages at irmacs.
TBA
This is a discrete mathematics instructional seminar commencing 24 February; it will meet on subsequent Thursdays from 3:00-4:00 p.m. The seminar will focus on "classical" papers and portions of books.
"In this talk I'll exhibit the interplay between Selberg integrals (interpreted broadly) and random matrix theory. Here an important role is played by the basic matrix operations of a random corank 1 projection (this reduces the number of nonzero eigenvalues by one) and bordering (this increases the number of eigenvalues by one)."
Concave utility functions and convex risk measures play crucial roles in economic and financial problems. The use of concave utility functions can be traced back at least to Bernoulli, when he posed and solved the St. Petersburg wager problem. They were the prevailing way to characterize rational market participants for a long period of time, until the 1970s when Black and Scholes introduced the replicating portfolio pricing method and Cox and Ross developed the risk neutral measure pricing formula. For the past several decades the `new paradigm' became the main stream. We will show that, in fact, the `new paradigm' is a special case of the traditional utility maximization and its dual problem. Moreover, the convex analysis perspective also highlights that overlooking sensitivity analysis in the `new paradigm' is one of the main reasons that led to the recent financial crisis. It is perhaps time again for bankers to learn convex analysis.
The talk will be divided into two parts. In the first part we lay out a discrete model for financial markets. We explain the concept of arbitrage and the no arbitrage principle. This is followed by the important fundamental theorem of asset pricing, in which the no arbitrage condition is characterized by the existence of martingale (risk neutral) measures. The proof of this gives us a first taste of the importance of convex analysis tools. We then discuss how to use utility functions and risk measures to characterize the preferences of market agents. The second part of the talk focuses on the issue of pricing financial derivatives. We use simple models to illustrate the idea of the prevailing Black-Scholes replicating portfolio pricing method and the related Cox-Ross risk-neutral pricing method for financial derivatives. Then, we show that the replicating portfolio pricing method is a special case of portfolio optimization and the risk neutral measure is a natural by-product of solving the dual problem. Taking the convex analysis perspective of these methods h
Concave utility functions and convex risk measures play crucial roles in economic and financial problems. The use of concave utility functions can be traced back at least to Bernoulli, when he posed and solved the St. Petersburg wager problem. They were the prevailing way to characterize rational market participants for a long period of time, until the 1970s when Black and Scholes introduced the replicating portfolio pricing method and Cox and Ross developed the risk neutral measure pricing formula. For the past several decades the `new paradigm' became the main stream. We will show that, in fact, the `new paradigm' is a special case of the traditional utility maximization and its dual problem. Moreover, the convex analysis perspective also highlights that overlooking sensitivity analysis in the `new paradigm' is one of the main reasons that led to the recent financial crisis. It is perhaps time again for bankers to learn convex analysis.
The talk will be divided into two parts. In the first part we lay out a discrete model for financial markets. We explain the concept of arbitrage and the no arbitrage principle. This is followed by the important fundamental theorem of asset pricing, in which the no arbitrage condition is characterized by the existence of martingale (risk neutral) measures. The proof of this gives us a first taste of the importance of convex analysis tools. We then discuss how to use utility functions and risk measures to characterize the preferences of market agents. The second part of the talk focuses on the issue of pricing financial derivatives. We use simple models to illustrate the idea of the prevailing Black-Scholes replicating portfolio pricing method and the related Cox-Ross risk-neutral pricing method for financial derivatives. Then, we show that the replicating portfolio pricing method is a special case of portfolio optimization and the risk neutral measure is a natural by-product of solving the dual problem. Taking the convex analysis perspective of these methods h
Professor Jonathan Borwein shares with us his passion for \(\pi\), taking us on a journey through its rich history. Professor Borwein begins with approximations of \(\pi\) by ancient cultures, and leads us through the work of Archimedes, Newton and others to the calculation of \(\pi\) in today's age of computers.
Professor Borwein is currently Laureate Professor in the School of Mathematical and Physical Sciences at the University of Newcastle. His research interests are broad, spanning pure, applied and computational mathematics and high-performance computing. He is also Chair of the Scientific Advisory Committee at the Australian Mathematical Sciences Institute (AMSI).
This talk will be broadcast from the Access Grid room V206 at the University of Newcastle, and will link to the West coast of Canada.
For more information visit AMSI's Pi Day website or read Jon Borwein's talk.
Co-author: Thomas Prellberg (Queen Mary, University of London)
Various kinds of paths on lattices are often used to model polymers. We describe some partially directed path models for which we find the exact generating functions, using instances of the `kernel method'. In particular, motivated by recent studies of DNA unzipping, we find and analyze the generating function for pairs of non-crossing partially directed paths with contact interactions. Although the expressions involved in the two-path problem are unwieldy and tax the capacities of Maple and Mathematica, we are still able to gain an understanding of the singularities of the generating function which govern the behaviour of the model.
The Mahler measure of a polynomial of several variables has been a subject of much study over the past thirty years. Very few closed forms are proven but more are conjectured. We provide systematic evaluations of various higher and multiple Mahler measures using log-sine integrals. We also explore related generating functions for the log-sine integrals. This work makes frequent use of “The Handbook” and involves extensive symbolic computations.
In the 1980s R. Grigorchuk found a finitely generated group such that the number of elements that can be written as a product of at most \(n\) generators grows faster than any polynomial in \(n\), but slower than any exponential in \(n\), so-called "intermediate" growth.
It can be described as a group of automorphisms of an infinite rooted binary tree, or in terms of abstract computing devices called "non-initial finite transducers".
In this talk I will describe what some of these short words/products of generators look like, and speculate on the asymptotic growth rate of all short words of length \(n\).
This is joint unpublished work with Mauricio Gutierrez (Tufts) and Zoran Sunic (Texas A&M).
The Mahler measure of a polynomial of several variables has been a subject of much study over the past thirty years. Very few closed forms are proven but more are conjectured. We provide systematic evaluations of various higher and multiple Mahler measures using log-sine integrals. We also explore related generating functions for the log-sine integrals. This work makes frequent use of “The Handbook” and involves extensive symbolic computations.
I will continue by showing relationships between Mahler measures and log-sine integrals [PDF]. This should be comprehensible whether or not you heard Part 1.
This talk will present recent theoretical and experimental results contrasting quantum randomness with pseudo-randomness.
We shall conclude the discussion of some of the mathematics surrounding Birkhoff's Theorem about doubly stochastic matrices.
The stochastic Loewner evolution (SLE) is a one-parameter family of random growth processes in the complex plane introduced by the late Oded Schramm in 1999 which is predicted to describe the scaling limit of a variety of statistical physics models. Recently a number of rigorous results about such scaling limits have been established; in fact, Wendelin Werner was awarded the Fields Medal in 2006 for "his contributions to the development of stochastic Loewner evolution, the geometry of two-dimensional Brownian motion, and conformal field theory" and Stas Smirnov was awarded the Fields Medal in 2010 "for the proof of conformal invariance of percolation and the planar Ising model in statistical physics." In this talk, I will introduce some of these models including the Ising model, self-avoiding walk, loop-erased random walk, and percolation. I will then discuss SLE, describe some of its basic properties, and touch on the results of Werner and Smirnov as well as some of the major open problems in the area. This talk will be "colloquium style" and is intended for a general mathematics audience.
In my talk I will review some recent progress on evaluations of Mahler measures via hypergeometric series and Dirichlet L-series. I will provide more details for the case of the Mahler measure of $1+x+1/x+y+1/y$, whose evaluation was observed by C. Deninger and conjectured by D. Boyd (1997). The main ingredients are relations between modular forms and hypergeometric series in the spirit of Ramanujan. The talk is based on joint work with Mat Rogers.
The demiclosedness principle is one of the key tools in nonlinear analysis and fixed point theory. In this talk, this principle is extended and made more flexible by two mutually orthogonal affine subspaces. Versions for finitely many (firmly) nonexpansive operators are presented. As an application, a simple proof of the weak convergence of the Douglas-Rachford splitting algorithm is provided.
This week we shall start the classical paper by Jack Edmonds and D. R. Fulkerson on partitioning matroids.
On Wednesday afternoon, we will be visited by Dr Stephen Hardy and Dr Kieran Larkin from Canon Information Systems Research Australia, Sydney. Drs Larkin and Hardy will be here to explore research opportunities with University of Newcastle researchers. To familiarise them with what we do and to help us understand what they do, there will be three short talks, giving information on the functions and activities of the CDSC, CARMA and Canon's group of 45 researchers. All are welcome to participate.
You are invited to celebrate the life and work of Paul Erdős!
NUMS and CARMA are holding a "Meet Paul Erdős Night" on Wednesday the 20th April starting at 4pm in V07 and we'd love you to come. You can view a poster with information about the night here.
Please RSVP by next Friday 15th April so that we can cater appropriately. To RSVP, reply to: nums@newcastle.edu.au
The Chaney-Schaefer $\ell$-tensor product $E\tilde{\otimes}_{\ell}Y$ of a Banach lattice $E$ and a Banach space $Y$ may be viewed as an extension of the Bochner space $L^p(\mu,Y)$ $(1\leq p < \infty)$. We consider an extension of a classical martingale characterization of the Radon-Nikodým property in $L^p(\mu,Y)$, for $1 < p < \infty$, to $E\tilde{\otimes}_{\ell}Y$. We consider consequences of this extension and, time permitting, use it to represent set-valued measures of risk defined on Banach lattice-valued Orlicz hearts.
We introduce, assuming only a modest background in one variable complex analysis, the rudiments of infinite dimensional holomorphy. Approaches and some answers to elementary questions arising from considering monomial expansions in different settings and spaces are used to sample the subject.
We meet this Thursday at the usual time when I will show you a nice application of the Edmonds-Fulkerson matroid partition theorem, namely, I'll prove that Paley graphs have Hamilton decompositions (an unpublished result).
The elements of a free group are naturally considered to be reduced "words" in a certain alphabet. In this context, a palindrome is a group element which reads the same from left-to-right and right-to-left. Certain primitive elements, elements that can be part of a basis for the free group, are palindromes. We discuss these elements, and related automorphisms.
We resolve some recent and fascinating conjectural formulae for $1/\pi$ involving the Legendre polynomials. Our main tools are hypergeometric series and modular forms, though no prior knowledge of modular forms is required for this talk. Using these we are able to prove some general results regarding generating functions of Legendre polynomials and draw some unexpected number theoretic connections. This is joint work with Heng Huat Chan and Wadim Zudilin. The authors dedicate this paper to Jon Borwein's 60th birthday.
In the late seventies, Bill Thurston defined a semi-norm on the homology of a 3-dimensional manifold which lends itself to the study of manifolds which fibre over the circle. This led him to formulate the Virtual Fibration Conjecture, which is fairly inscrutable and implies almost all major results and conjectures in the field. Nevertheless, Thurston gave the conjecture "a definite chance for a positive answer" and much research is currently devoted to it. I will describe the Thurston norm, its main properties and applications, as well as its relationship to McMullen’s Alexander norm and the geometric invariant for groups due to Bieri, Neumann and Strebel.
The aim of this talk is to demonstrate how cyclic division algebras and their orders can be used to enhance wireless communications. This is done by embedding the information bits to be transmitted into smart algebraic structures, such as matrix representations of order lattices. We will recall the essential algebraic definitions and structures, and further familiarize the audience with the transmission model of fading channels. An example application of major current interest is digital video broadcasting. Examples suitable to this application will be provided.
The Landau-Lifshitz-Gilbert equation (LLGE) comes from a model for the dynamics of the magnetization of a ferromagnetic material. In this talk we will first describe existing finite element methods for numerical solution of the deterministic and stochastic LLGEs. We will then present another finite element solution to the stochastic LLGE. This is a work in progress jointly with B. Goldys and K-N Le.
Let $T$ be a topological space (a compact subspace of ${\mathbb R^m}$, say) and let $C(T)$ be the space of real continuous functions on $T$, equipped with the uniform norm: $||f|| = \max_{t\in T}|f(t)|$ for all $f \in C(T)$. Let $G$ be a finite-dimensional linear subspace of $C(T)$. If $f \in C(T)$ then $$d(f,G) = \inf\{||f-g|| : g \in G\}$$ is the distance of $f$ from $G$, and $$P_G(f) = \{g \in G : ||f-g|| = d(f,G)\}$$ is the set of best approximations to $f$ from $G$. Then $$P_G : C(T) \rightarrow P(G)$$ is the set-valued metric projection of $C(T)$ onto $G$. In the 1850s P. L. Chebyshev considered $T = [a, b]$ and $G$ the space of polynomials of degree $\leq n-1$. Our concern is with possible properties of $P_G$. The historical development, beginning with Chebyshev, Haar (1918) and Mairhuber (1956), and the present state of knowledge will be outlined. New results will demonstrate that the story is still incomplete.
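To make d(f,G) and P_G(f) concrete, here is a small numerical sketch (not part of the talk) that computes a near-best uniform approximation by linear programming; the target function, the degree and the grid, which only approximates the uniform norm on T, are all arbitrary choices.

    import numpy as np
    from scipy.optimize import linprog

    # approximate f(t) = exp(t) on T = [0,1] by G = polynomials of degree <= 2:
    # minimise eps subject to |f(t_i) - sum_j a_j t_i^j| <= eps on a fine grid
    t = np.linspace(0.0, 1.0, 201)
    f = np.exp(t)
    V = np.vander(t, 3, increasing=True)            # columns 1, t, t^2

    n_coef = V.shape[1]
    c = np.zeros(n_coef + 1); c[-1] = 1.0           # objective: minimise eps
    A_ub = np.block([[ V, -np.ones((len(t), 1))],
                     [-V, -np.ones((len(t), 1))]])
    b_ub = np.concatenate([f, -f])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n_coef + [(0, None)], method='highs')

    print("coefficients of a best approximation:", res.x[:n_coef])
    print("approximate distance d(f,G):", res.x[-1])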
High-dimensional integrals come up in a number of applications like statistics, physics and financial mathematics. If explicit solutions are not known, one has to resort to approximative methods. In this talk we will discuss equal-weight quadrature rules called quasi-Monte Carlo. These rules are defined over the unit cube $[0,1]^s$ with carefully chosen quadrature points. The quadrature points can be obtained using number-theoretic and algebraic methods and are designed to have low discrepancy, where discrepancy is a measure of how uniformly the quadrature points are distributed in $[0,1]^s$. In the one-dimensional case, the discrepancy coincides with the Kolmogorov-Smirnov distance between the uniform distribution and the empirical distribution of the quadrature points and has also been investigated in a paper by Weyl published in 1916.
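As a small illustration (mine, not from the talk), the sketch below compares an equal-weight rule using two-dimensional Halton points (van der Corput sequences in bases 2 and 3) with plain Monte Carlo for a smooth integrand on $[0,1]^2$; the integrand and sample size are arbitrary choices.

```python
import numpy as np

def van_der_corput(n, base):
    """First n points of the van der Corput sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        k, f, x = i + 1, 1.0 / base, 0.0
        while k > 0:
            x += (k % base) * f
            k //= base
            f /= base
        seq[i] = x
    return seq

def integrand(x, y):
    return np.sin(x) * np.sin(y)          # exact integral over [0,1]^2: (1 - cos 1)^2

n = 4096
exact = (1.0 - np.cos(1.0)) ** 2

qx, qy = van_der_corput(n, 2), van_der_corput(n, 3)       # 2D Halton points
qmc = np.mean(integrand(qx, qy))                           # equal-weight QMC rule

rng = np.random.default_rng(0)
mc = np.mean(integrand(rng.random(n), rng.random(n)))      # plain Monte Carlo

print(f"QMC error {abs(qmc - exact):.2e}   MC error {abs(mc - exact):.2e}")
```

For a smooth integrand the low-discrepancy points typically give a markedly smaller error for the same number of function evaluations.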
The talk will focus on recent results or work in progress, with some open problems which span both Combinatorial Design and Sperner Theory. The work focuses upon the duality between antichains and completely separating systems. An antichain is a collection $\cal A$ of subsets of $[n]=\{1,...,n\}$ such that for any distinct $A,B\in\cal A$, $A$ is not a subset of $B$. A $k$-regular antichain on $[m]$ is an antichain in which each element of $[m]$ occurs exactly $k$ times. A CSS is the dual of an antichain. An $(n,k)CSS \cal C$ is a collection of blocks of size $k$ on $[n]$, such that for each distinct $a,b\in [n]$ there are sets $A,B \in \cal C$ with $a \in A-B$ and $b \in B-A$. The notions of $k$-regular antichains of size $n$ on $[m]$ and $(n,k)CSS$s in $m$ blocks are dual concepts. Natural questions to be considered include: Does a $k$-regular antichain of size $n$ exist on $[m]$? For $k
The concept of orthogonal double covers (ODC) of graphs originates in questions concerning database constraints and problems in statistical combinatorics and in design theory. An ODC of the complete graph $K_n$ by a graph $G$ is a collection of $n$ subgraphs of $K_n$, all isomorphic to $G$, such that any two of them share exactly one edge, and every edge of $K_n$ occurs in exactly two of the subgraphs. We survey some of the main results and conjectures in the area as well as constructions, generalizations and modifications of ODC.
This paper studies combinations of the Riemann zeta function, based on one defined by P.R. Taylor, and shown by him to have all its zeros on the critical line. With a rescaled complex argument, this is denoted here by ${\cal T}_-(s)$, and is considered together with a counterpart function ${\cal T}_+(s)$, symmetric rather than antisymmetric about the critical line. We prove by a graphical argument that ${\cal T}_+(s)$ has all its zeros on the critical line, and that the zeros of both functions are all of first order. We also establish a link between the zeros of ${\cal T}_-(s)$ and of ${\cal T}_+(s)$ with zeros of the Riemann zeta function $\zeta(2s-1)$, and between the distribution functions of the zeros of the three functions.
This talk concerns developing a numerical method of the Newton type to solve systems of nonlinear equations described by nonsmooth continuous functions. We propose and justify a new generalized Newton algorithm based on graphical derivatives, which have never been used to derive a Newton-type method for solving nonsmooth equations. Based on advanced techniques of variational analysis and generalized differentiation, we establish the well-posedness of the algorithm, its local superlinear convergence, and its global convergence of the Kantorovich type. Our convergence results hold with no semismoothness and Lipschitzian assumptions, which is illustrated by examples. The algorithm and main results obtained in the paper are compared with well-recognized semismooth and $B$-differentiable versions of Newton's method for nonsmooth Lipschitzian equations.
One of the most effective avenues in recent experimental mathematics research is the computation of definite integrals to high precision, followed by the identification of resulting numerical values as compact analytic formulas involving well-known constants and functions. In this talk we summarize several applications of this methodology in the realm of applied mathematics and mathematical physics, in particular Ising theory, "box integrals", and the study of random walks.
We will investigate the existence of common fixed points for pointwise Lipschitzian semigroups of nonlinear mappings $T_t : C \to C$, where $C$ is a bounded, closed, convex subset of a uniformly convex Banach space $X$, i.e. a family such that $T_0(x) = x$ and $T_{s+t}(x) = T_s(T_t(x))$, where each $T_t$ is pointwise Lipschitzian, i.e. there exists a family of functions $a_t : C \to [0,\infty)$ such that $\|T_t(x)-T_t(y)\| \le a_t(x)\|x-y\|$ for $x, y \in C$. We will also demonstrate how the asymptotic aspect of the pointwise Lipschitzian semigroups can be expressed in terms of the respective Fréchet derivatives. We will discuss some questions related to the weak and strong convergence of certain iterative algorithms for the construction of the stationary and periodic points for such semigroups.
These talks are aimed at extending the undergraduates' field of vision, or increasing their level of exposure to interesting ideas in mathematics. We try to present topics that are important but not covered (to our knowledge) in undergraduate coursework. Because of the brevity and intended audience of the talks, the speaker generally only scratches the surface, concentrating on the most interesting aspects of the topic.
In my talk I will try to overview ideas behind (still recent) achievements on arithmetic properties of numbers $\zeta(s)=\sum_{n=1}^\infty n^{-s}$ for integral $s\ge2$, with more emphasis on odd $s$. The basic ingredients of proofs are generalized hypergeometric functions and linear independence criteria. I will also address some "most recent" results and observations in the subject, as well as connections with other problems in number theory.
This paper considers designing permission sets to influence the project selection decision made by
a better-informed agent. The project characteristics are two-dimensional. The principal can verify the characteristics of the project selected by the agent. However, the principal cannot observe the number and characteristics of those projects that the agent could, but does not, propose. The payoffs to the agent and the principal are different. Using calculus of variations, we solve for the optimal permission set, which can be characterized by a threshold function. We obtain comparative statics on the preference alignment and the expected number of projects available. When outcome-based incentives are feasible, we discuss the use of financial inducement to maximize social welfare. We also extend our analysis to two cases: 1) when one of the project characteristics is unobservable; and 2) when there are multiple agents with private preferences and the principal must establish a universal permission set.
Key words: calculus of variations, optimal permission set, project management.
The most important open problem in Monotone Operator Theory concerns the maximal monotonicity of the sum of two maximally monotone operators provided that Rockafellar's constraint qualification holds. In this talk, we prove the maximal monotonicity of the sum of a maximally monotone linear relation and the subdifferential of a proper lower semicontinuous convex function satisfying Rockafellar's constraint qualification. Moreover, we show that this sum operator is of type (FPV).
Infinite index subgroups of integer matrix groups like $SL(n,Z)$ which are Zariski dense in $SL(n)$ arise in geometric diophantine problems (e.g., integral Apollonian packings) as well as monodromy groups associated with families of varieties. One of the key features needed when applying such groups to number theoretic problems is that the congruence graphs associated with these groups are "expanders". We will introduce and explain these ideas and review some recent developments especially those connected with the affine sieve.
It is shown that, for maximally monotone linear relations defined on a general Banach space, the monotonicities of dense type, of negative-infimum type, and of Fitzpatrick-Phelps type are the same and equivalent to monotonicity of the adjoint. This result also provides affirmative answers to two problems: one posed by Phelps and Simons, and the other by Simons.
We continue looking at the 1960 Hoffman-Singleton paper about Moore graphs and related topics.
Given a mixed-up sequence of distinct numbers, say 4 2 1 5 7 3 6, can you pass them through an infinite stack (first-in-last-out) from right to left, and put them in order?
[The slide here stepped the example through the stack: push 4, then push 2, pass 1 straight through to the output, then pop 2, so the output begins 1 2 while 4 remains on the stack and 5 7 3 6 are still to come... umm.] This talk will be about this problem - when can you do it with one stack, two stacks (in series), an infinite and a finite capacity stack in series, etc etc? How many permutations of 1,2,...,n are there that can be sorted? The answer will lie in the "forbidden subpatterns" of permutations, and it turns out there is a whole theory of this, which I will try to describe.
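As an illustrative sketch (my own, under the standard convention that one may either push the next input element onto the stack or pop the stack to the output, which may differ in orientation from the talk's diagrams), the greedy procedure below decides whether a permutation can be sorted with one infinite stack.

```python
def stack_sortable(perm):
    """Can perm be rearranged into 1, 2, ..., n using a single stack?

    Greedy rule: push the next input element, then pop whenever the top of the
    stack is the next value needed in the sorted output.  For a single
    unbounded stack this greedy strategy loses nothing.
    """
    stack, need = [], 1
    for x in perm:
        stack.append(x)
        while stack and stack[-1] == need:
            stack.pop()
            need += 1
    return not stack        # sorted successfully iff nothing is left stranded

print(stack_sortable([4, 2, 1, 5, 7, 3, 6]))   # False: 4 ends up buried under 5 and 7
print(stack_sortable([3, 1, 2]))               # True
```

The permutations for which this returns True are exactly those avoiding the forbidden subpattern 231, which is the kind of characterisation the talk alludes to.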
20-minute presentation followed by 10 minutes of questions and discussion.
We introduce the concept and several examples of q-analogs. A particular focus is on the q-binomial coefficients, which are characterized in a variety of ways. We recall classical binomial congruences and review their known q-analogs. Finally, we establish a full q-analog of Ljunggren's congruence, which states that $\binom{ap}{bp} \equiv \binom{a}{b} \pmod{p^3}$.
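As a quick sanity check (my own, not from the talk), the classical congruence is easy to verify computationally; the snippet below tests it for small parameters and primes $p \ge 5$, the range in which it classically holds.

```python
from math import comb
from itertools import product

def ljunggren_holds(a, b, p):
    """Classical Ljunggren congruence: C(a*p, b*p) == C(a, b) (mod p^3)."""
    return (comb(a * p, b * p) - comb(a, b)) % p**3 == 0

# Exhaustive check over a small range; for p = 2, 3 the mod p^3 statement can fail.
for p, a in product([5, 7, 11, 13], range(1, 8)):
    for b in range(a + 1):
        assert ljunggren_holds(a, b, p), (a, b, p)
print("Ljunggren's congruence verified on the tested range.")
```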
This week we shall conclude the proof of the uniqueness of the Hoffman-Singleton graph.
This Thursday is your chance to start anew! I shall be starting a presentation of the best work that has been done on Lovasz's famous 1979 problem (now a conjecture) stating that every connected vertex-transitive graph has a Hamilton path. This is good stuff and requires minimal background.
Basically, a function is Lipschitz continuous if it has a bounded slope. This notion can be extended to set-valued maps in different ways. We will mainly focus on one of them: the so-called Aubin (or Lipschitz-like) property. We will employ this property to analyze the iterates generated by an iterative method known as the proximal point algorithm. Specifically, we consider a generalized version of this algorithm for solving a perturbed inclusion $$y \in T(x),$$ where $y$ is a perturbation element near 0 and $T$ is a set-valued mapping. We will analyze the behavior of the convergent iterates generated by the algorithm and we will show that they inherit the regularity properties of $T$, and vice versa. We analyze the cases when the mapping $T$ is metrically regular (the inverse map has the Aubin property) and strongly regular (the inverse is locally a Lipschitz function). We will not assume any type of monotonicity.
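As a hedged scalar illustration (a toy of my own, far simpler than the set-valued, perturbed setting analysed in the talk), the proximal point algorithm for $0 \in T(x)$ with $T = \partial f$ and $f(x) = |x - 2|$ amounts to iterating the resolvent, which here is a shifted soft-thresholding map.

```python
def resolvent(z, lam, a=2.0):
    """(I + lam*T)^{-1}(z) for T = subdifferential of f(x) = |x - a|:
    soft-threshold z - a by lam, then shift back."""
    d = z - a
    if d > lam:
        return a + d - lam
    if d < -lam:
        return a + d + lam
    return a

# Proximal point iteration x_{k+1} = (I + lam*T)^{-1}(x_k) for the inclusion 0 in T(x).
x, lam = 10.0, 0.5
for _ in range(30):
    x = resolvent(x, lam)
print("computed solution:", x)      # converges to x = 2, the unique zero of T
```

The talk's interest is in what survives of such convergence when $T$ is merely metrically or strongly regular and the data are perturbed, none of which this toy attempts to capture.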
We resolve and further study a sinc integral evaluation, first posed in The American Mathematical Monthly in [1967, p. 1015], which was solved in [1968, p. 914] and withdrawn in [1970, p. 657]. After a short introduction to the problem and its history, we give a general evaluation which we make entirely explicit in the case of the product of three sinc functions. Finally, we exhibit some general structure of the integrals in question.
The topic is Lovasz's Conjecture that all connected vertex-transitive graphs have Hamilton paths.
We are interested in local geometrical properties of a Banach space which are preserved under natural embeddings in all even dual spaces. An example of this behaviour which we generalise is:
if the norm of the space $X$ is Fréchet differentiable at $x \in S(X)$ then the norm of the second dual $X^{**}$ is Fréchet differentiable at $\hat{x}\in S(X^{**})$ and of $X^{****}$ at $\hat{\hat{x}} \in S(X^{****})$ and so on....
The results come from a study of Hausdorff upper semicontinuity properties of the duality mapping characterising general differentiability conditions satisfied by the norm.
One of the most intriguing problems in metric fixed point theory is whether we can find closed, convex and unbounded subsets of Banach spaces with the fixed point property. A celebrated theorem due to W.O. Ray in 1980 states that this cannot happen if the space is Hilbert. This problem was so poorly understood that two antagonistic questions were raised: the first asked whether this property characterizes Hilbert spaces within the class of Banach spaces, while the second asked whether this property characterizes any space at all, that is, whether Ray's theorem holds in every Banach space. The second problem is still open but the first one has recently been answered in the negative by T. Domínguez Benavides, after showing that Ray's theorem also holds true in the classical space of real sequences $c_0$.
The situation seems, however, to be completely different for CAT(0) spaces. Although Hilbert spaces are a particular class of CAT(0) spaces, there are different examples of CAT(0) spaces in the literature, including $\mathbb{R}$-trees, for which we can find closed, convex and unbounded subsets with the fixed point property. In this talk we will look closely at this problem. First, we will introduce a geometrical condition, inspired by the Banach-Steinhaus theorem, for CAT(0) spaces under which we can still assure that Ray's theorem holds true. We will provide different examples of CAT(0) spaces with this condition but we will notice that all these examples are of a very strong Hilbertian nature. Then we will look at $\delta$-hyperbolic geodesic spaces. Viewed from far away, these spaces, if unbounded, resemble $\mathbb{R}$-trees, so it is natural to try to find convex, closed and unbounded subsets with the fixed point property in these spaces. We will present some partial results in this direction.
This talk is based on joint work with Bożena Piątek.
This week the discrete mathematics instructional seminar will continue with a consideration of the Lovasz problem. This is the last meeting of the seminar until 13 October.
In Euclidean geometry, Ptolemy's theorem is a relation between the four sides and two diagonals of a cyclic quadrilateral (a quadrilateral whose vertices lie on a common circle). If the quadrilateral is given with its four vertices $A$, $B$, $C$, and $D$ in order, then the theorem states that: $$|AC| \cdot |BD| = |AB| \cdot |CD| + |AD| \cdot |BC|.$$ Furthermore, it is well known that in every Euclidean (or Hilbert) space $H$ we have that $$||x - y|| \cdot ||z - w|| \leq ||x - z|| \cdot ||y - w|| + ||z - y|| \cdot ||x - w||$$ for any four points $w, x, y, z \in H$. This is the classical Ptolemy inequality and it is well-known that it characterizes the inner product spaces among all normed spaces. A Ptolemy metric space is any metric space for which the same inequality holds, replacing norms by distances, for any four points. CAT(0) spaces are geodesic spaces of global nonpositive curvature in the sense of Gromov. Hilbert spaces are CAT(0) spaces and, even more, CAT(0) spaces have many common properties with Hilbert spaces. In particular, although a Ptolemy geodesic metric space need not be CAT(0), any CAT(0) space is a Ptolemy metric space. In this expository talk we will show some recent progress about the connection between Ptolemy metric spaces and CAT(0) spaces.
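The classical inequality is easy to probe numerically; as a small illustration of my own (not from the talk), the snippet below evaluates the Ptolemy gap for random quadruples in $\mathbb{R}^3$, where it is always nonnegative.

```python
import numpy as np

rng = np.random.default_rng(1)

def ptolemy_gap(w, x, y, z):
    """||x-z||*||y-w|| + ||z-y||*||x-w|| - ||x-y||*||z-w||; nonnegative in Hilbert space."""
    d = lambda a, b: np.linalg.norm(a - b)
    return d(x, z) * d(y, w) + d(z, y) * d(x, w) - d(x, y) * d(z, w)

gaps = [ptolemy_gap(*rng.standard_normal((4, 3))) for _ in range(10_000)]
print("minimum Ptolemy gap over random quadruples in R^3:", min(gaps))
```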
In this seminar talk we will recall results on the Drop property in Banach spaces and study it in geodesic spaces. In particular, we will show a variant of Daneš' Drop Theorem in Busemann convex spaces and derive well-posedness results about minimization of convex functions. This talk is based on joint work with Adriana Nicolae.
The choice of a plan for radiotherapy treatment for an individual cancer patient requires the careful trade-off between the goals of delivering a sufficiently high radiation dose to the tumour and avoiding irradiation of critical organs and normal tissue. This problem can be formulated as a multi-objective linear programme (MOLP). In this talk we present a method to compute a finite set of non-dominated points that can be proven to uniformly cover the complete non-dominated set of an MOLP (a finite representation). This method generalises and improves upon two existing methods from the literature. We apply this method to the radiotherapy treatment planning problem, showing some results for clinical cases. We illustrate how the method can be used to support clinicians' decision making when selecting a treatment plan. The treatment planner only needs to specify a threshold for recognising two treatment plans as different and is able to interactively navigate through the representative set without the trial-and-error process often used in practice today.
The talk will begin with a reminder of what a triangulated category is, the context in which they arose, and why we care about them. Then we will discuss the theory of compactly generated triangulated categories, again illustrating the applications. This theory is old and well understood. Finally we will come to well generated categories, where several open problems remain very mysterious.
Noncommutative geometry is based on fairly sophisticated methods: noncommutative C*-algebras are called noncommutative topological spaces, noncommutative von Neumann algebras are noncommutative measure spaces, and Hopf algebras and homological invariants describe the geometry. Standard topology, on the other hand, is based on naive intuitions about discontinuity: a continuous function is one whose graph does not have any gaps, and cutting and gluing are used to analyse and reconstruct geometrical objects. This intuition does not carry over to the noncommutative theory, and the dictum from quantum mechanics that it does not make sense any more to think about point particles perhaps explains a lack of expectation that it should. The talk will describe an attempt to make this transfer by computing the polar decompositions of certain operators in the group C*-algebras of free groups. The computation involves some identities and evaluations of integrals that might interest the audience, and the polar decomposition may be interpreted as a noncommutative version of the double angle formula familiar from high school geometry.
Mean periodic functions of a single real variable were an innovation of the mid-20th century. Although not as well known as almost periodic functions, they have some nice properties, with applications to certain mean value theorems.
We shall start looking at Dave Witte's (now Dave Morris) proof that connected Cayley digraphs of prime power order have Hamilton directed cycles.
This week we shall conclude our look at the paper by Dave Witte on Hamilton directed cycles in Cayley digraphs of prime power order.
I shall describe highlights of my two decades of experience with Advanced Collaborative Environments (ACEs) in Canada and Australia, running shared seminars, conferences, resources and courses over the internet. I shall also describe the AMSI Virtual Lab proposal which has just been submitted to NeCTAR. The slides for much of this talk are at http://www.carma.newcastle.edu.au/jon/aces11.pdf.
We interpret the Hamiltonian Cycle problem (HCP) as an optimisation problem with the determinant objective function, naturally arising from the embedding of HCP into a Markov decision process. We also exhibit a characteristic structure of the class of all cubic graphs that stems from the spectral properties of their adjacency matrices and provide an analytic explanation of this structure.
We look at (parts of) the survey paper Dependent Random Choice by Jacob Fox and Benny Sudakov: http://arxiv.org/abs/0909.3271
The abstract of the paper says "We describe a simple and yet surprisingly powerful probabilistic technique which shows how to find in a dense graph a large subset of vertices in which all (or almost all) small subsets have many common neighbors. Recently this technique has had several striking applications to Extremal Graph Theory, Ramsey Theory, Additive Combinatorics, and Combinatorial Geometry. In this survey we discuss some of them."
My plan for the seminar is to start with a quick recap of the classics of extremal (hyper)graph theory (i.e. Turan, Ramsey, Ramsey-Turan), then look at some simple examples for the probabilistic method in action, and finally come to the striking applications mentioned in the quoted abstract.
Only elementary probability is required.
The arrival of online casinos in 1996 brought games that you would find at land-based casinos to the computer screens of gamblers all over the world. A major benefit of online casinos is the scope for automating systems across several computers for favourable games, as this has the potential to make a significant amount of profit. This article applies this concept to online progressive video poker games. By establishing a set of characteristics to compare different games, analyses are carried out to identify which game should be the starting point for building an automated system. Bankroll management and playing strategies are also analyzed in this article, and are shown to be important components if profiting from online gambling is going to be a long-term business.
Within the topic of model-based forecasting with exponential smoothing, this paper seeks to contribute to the understanding of the property of certain stochastic processes to converge almost surely to a constant. It provides a critical discussion of the related views and ideas found in the recent forecasting literature and aims at elucidating the present confusion by review and study of the classical and less known theorems of probability theory and random processes. The paper then argues that a useful role of exponential smoothing for modelling and forecasting sequential count data is limited and methods that are either not based on exponential smoothing or use exponential smoothing in a more flexible way are worthy of exploration. An approach to forecasting such data based on applying exponential smoothing to the probabilities of each count outcome is thus introduced and its merits are discussed in the context of pertinent statistical literature.
In this talk we consider a problem of scheduling several jobs on multiple machines subject to precedence and resource constraints. Each job has a due date and the objective is to minimize the cumulative weighted tardiness across all jobs. We investigate how to efficiently obtain heuristic solutions on multi-core computers using an Ant Colony System (ACS) framework for the optimisation. The talk will discuss some of the challenges that arise in designing a multi-threaded heuristic and provide computational results for some alternative algorithm variants. The results show that the ACS heuristic is more effective than other methods developed to date, particularly for large problem instances.
The relation between mechanics and optimization goes back at least to Euler and was further strengthened by the Lagrangian and Hamiltonian formulations of Newtonian mechanics. Since then, numerous variational formulations of mechanical phenomena have been proposed and, although the link to optimization has often been somewhat obscured in the subsequent development of numerical methods, it is in fact as strong as ever. In this talk, I will summarize some of the recent developments in the application of modern mathematical programming methods to problems involving the simulation of mechanical phenomena. While the methodology is quite general, emphasis will be on static and dynamic deformation processes in civil engineering, geomechanics and the earth sciences.
The Feasibility Pump (FP) has proved to be an effective method for finding feasible solutions to Mixed-Integer Programming problems. We investigate the benefits of replacing the rounding procedure with a more sophisticated integer line search that efficiently explores a larger set of integer points with the aim of obtaining an integer feasible solution close to an FP iterate. An extensive computational study on 1000+ benchmark instances demonstrates the effectiveness of the proposed approach.
A common issue when integrating airline planning processes
is the long planning horizon of the crew pairing problem. We propose a
new approach to the crew pairing problem through which we retain a
significant amount of flexibility. This allows us to solve an
integrated aircraft routing, crew pairing, and tail number assignment
problem only a few days before the day of operations and with a rolling
planning horizon. The model simultaneously schedules appropriate rest
periods for all crews and maintenance checks for all tail numbers.
A Branch-and-Price method is proposed in which each tail number and
each 'crew block' is formulated as a subproblem.
A water and sewage system, a power grid, a
telecommunication network, are all examples of network
infrastructures. Network infrastructures are a common phenomenon in
many industries. A network infrastructure is characterized by physical
links and connection points. Examples of physical links are pipes
(water and sewage system), fiber optic cables (telecommunication
network), power lines (power grid), and tracks (rail network). Such
network infrastructures have to be maintained and, often, have to be
upgraded or expanded. Network upgrades and expansions typically occur
over a period of time due to budget constraints and other
considerations. Therefore, it becomes important to determine both when
and where network upgrades and expansions should take place so as to
minimize the infrastructure investment as well as current and future
operational costs.
We introduce a class of multi-period network infrastructure expansion
problems that allow us to study the key issues related to the choice
and timing of infrastructure expansions and their impact on the costs
of the activities performed on that infrastructure. We focus on the
simplest variant, an incremental shortest path problem (ISPP). We show
that even ISPP is NP-hard, we introduce a special case that is
polynomially solvable, we derive structural solution properties, we
present an integer programming formulation and classes of valid
inequalities, and discuss the results of a computational study.
The classical single period problem (SPP) has wide applicability, especially in the service industries which dominate the economy. In this paper a single period production problem is considered as a specific type of SPP. The SPP model is extended by considering the probability of scrap and rework in production at the beginning of and during the period. The optimal solution which maximizes the expected value of total profit is obtained. When scrap items, and defective items requiring rework, are produced, the optimal profit of the system is reduced in comparison to an ideal production system. Moreover, the reduction in profit is more sensitive to an increase in the probability of producing scrap items than to an increase in the probability of producing defective items. These results would help managers to make the right decision about changing or revising machines or technologies.
In this presentation I briefly discuss practical and philosophical issues related to the role of the peer-review process in maintaining the quality of scientific publications. The discussion is based on, among other things, my experience over the past eight years in containing the spread of voodoo decision theories in Australia. To motivate the discussion, I ask: how do you justify the use of a model of local robustness (operating in the neighborhood of a wild guess) to manage Black Swans and Unknown Unknowns?
Design of a complex system needs both micro- and macro-level competencies to capture the underlying structure of the complex problem while ensuring convergence to a good solution point. Systems such as complex organizations, complex New Product Development (NPD) and complex networks of firms (Supply Chains, or SC) require competencies at both the macro (coordination and integration) and micro (capable designers and teams for NPD, capable firms in SC) levels. Given the high complexity of such problems at both the macro and micro levels, two kinds of errors can happen at each: 1) acceptance of a wrong solution or rejection of a right solution at the micro level; 2) coordination of entities that do not need any coordination [e.g. teams or designers working in NPD might put too much time into meetings, and firms in a SC might lose their flexibility due to limitations imposed by powerful leader firms in the SC], or a lack of deployed resources for entities that do need coordination [e.g. inconsistencies in decisions made in decentralized systems such as NPD and SC]. In this paper a simple and parsimonious Agent Based Model (ABM) of NK type is built and simulated to study these complex interactive systems. The results of the simulations provide some insights on the imperfect management of the above mentioned complex systems. For instance, we found that asymmetry in either of the above mentioned errors favours a particular policy for the management of these systems.
A problem that frequently arises in environmental surveillance is where to place a set of sensors in order to maximise collected information. In this article we compare four methods for solving this problem: a discrete approach based on the classical k-median location model, a continuous approach based on the minimisation of the prediction error variance, an entropy-based algorithm, and simulated annealing. The methods are tested on artificial data and data collected from a network of sensors installed in the Springbrook National Park in Queensland, Australia, for the purpose of tracking the restoration of biodiversity. We present an overview of these methods and a comparison of results.
This talk presents an innovative model for describing the effects of quality management (QM) on organizational productivity, traditionally researched by statistical models. Learning inside organizations, combined with the information processing metaphor of organizations, is applied to build a computational model for this research. A reinforcement learning (RL) algorithm is implemented in the computational model to characterize the effects of quality leadership on productivity. The results show that effective quality leadership, being a balanced combination of exploration of new actions and exploitation of previous good actions, outperforms pure exploration or exploitation strategies in the long run. However, pure exploitation outperforms the exploration and RL algorithms in the short term. Furthermore, the effects of the complexity of customer requirements on productivity are investigated. From the results it can be argued that more complexity usually leads to less productivity. Also, the gap between the random-action algorithm and RL is reduced when the complexity of customer requirements increases. As regards agent types, it can be inferred that well-balanced business processes comprised of similar agents (in terms of agents' processing time and accuracy) perform better than other scenarios.
Modular forms have had an important role in number theory for over one hundred years. Modular forms are also of interest in areas such as topology, cryptography and communications network theory. More recently, Peter Sarnak’s August talk, "Chaos, Quantum Mechanics and Number Theory" strongly suggested a link between modular forms and quantum mechanics. In this talk we explain modular forms, in the context of seeking a formula for the number of representations of an integer as the sum of four squares.
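For orientation (a classical fact I am adding as background, not necessarily the route the talk takes): Jacobi's four-square theorem gives $r_4(n) = 8\sum_{d \mid n,\; 4 \nmid d} d$, and the snippet below checks this against a brute-force count.

```python
from math import isqrt

def r4_bruteforce(n):
    """Number of integer quadruples (a, b, c, d) with a^2 + b^2 + c^2 + d^2 = n."""
    count, m = 0, isqrt(n)
    for a in range(-m, m + 1):
        for b in range(-m, m + 1):
            for c in range(-m, m + 1):
                r = n - a * a - b * b - c * c
                if r >= 0:
                    s = isqrt(r)
                    if s * s == r:
                        count += 2 if s > 0 else 1   # d = +s and d = -s
    return count

def r4_jacobi(n):
    """Jacobi's formula: 8 times the sum of divisors of n not divisible by 4."""
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

for n in range(1, 40):
    assert r4_bruteforce(n) == r4_jacobi(n), n
print("Jacobi's four-square formula verified for n < 40.")
```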
We look at (parts of) the survey paper Dependent Random Choice by Jacob Fox and Benny Sudakov: http://arxiv.org/abs/0909.3271. The abstract of the paper says "We describe a simple and yet surprisingly powerful probabilistic technique which shows how to find in a dense graph a large subset of vertices in which all (or almost all) small subsets have many common neighbors. Recently this technique has had several striking applications to Extremal Graph Theory, Ramsey Theory, Additive Combinatorics, and Combinatorial Geometry. In this survey we discuss some of them." My plan for the seminar is to start with a quick recap of the classics of extremal (hyper)graph theory (i.e. Turan, Ramsey, Ramsey-Turan), then look at some simple examples for the probabilistic method in action, and finally come to the striking applications mentioned in the quoted abstract. Only elementary probability is required.
In this talk I attempt to explain a general approach to proving irrationality and linear independence results for q-hypergeometric series. An explicit Padé construction is introduced with some (quantitative) arithmetic implications for well-known q-mathematical constants.
Probability densities are a major tool in exploratory statistics and stochastic modelling. I will talk about a numerical technique for the estimation of a probability distribution from scattered data using exponential families and a maximum a-posteriori approach with Gaussian process priors. Using Cameron-Martin theory, it can be seen that density estimation leads to a nonlinear variational problem with a functional defined on a reproducing kernel Hilbert space. This functional is strictly convex. A dual problem based on Fenchel duality will also be given. The (original) problem is solved using a Newton-Galerkin method with damping for global convergence. In this talk I will discuss some theoretical results relating to the numerical solution of the variational problem and the results of some computational experiments. A major challenge is of course the curse of dimensionality which appears when high-dimensional probability distributions are estimated.
Thomas will be finishing his talks this Thursday where he will finish looking at (parts of) the survey paper Dependent Random Choice by Jacob Fox and Benny Sudakov: http://arxiv.org/abs/0909.3271
We discuss the asymmetric sandwich theorem, a generalization of the Hahn–Banach theorem. As applications, we derive various results on the existence of linear functionals in functional analysis that include bivariate, trivariate and quadrivariate generalizations of the Fenchel duality theorem. We consider both results that use a simple boundedness hypothesis (as in Rockafellar’s version of the Fenchel duality theorem) and also results that use Baire’s theorem (as in the Robinson–Attouch–Brezis version of the Fenchel duality theorem).
Lattice paths effectively model phenomena in chemistry, physics and probability theory. Techniques of analytic combinatorics are very useful in determining asymptotic estimates for enumeration, although asymptotic growth of the number of Self Avoiding Walks on a given lattice is known empirically but not proved. We survey several families of lattice paths and their corresponding enumerative results, both explicit and asymptotic. We conclude with recent work on combinatorial proofs of asymptotic expressions for walks confined by two boundaries.
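To illustrate why the self-avoiding walk count is known only empirically (my own aside, not from the talk): exact values can be produced by brute force only for very short walks, as in the sketch below, while no proved closed form or asymptotic growth rate is available.

```python
def count_saws(n):
    """Count n-step self-avoiding walks on the square lattice Z^2 from the origin."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(current, visited, remaining):
        if remaining == 0:
            return 1
        x, y = current
        total = 0
        for dx, dy in steps:
            nxt = (x + dx, y + dy)
            if nxt not in visited:          # self-avoidance constraint
                visited.add(nxt)
                total += extend(nxt, visited, remaining - 1)
                visited.remove(nxt)
        return total

    return extend((0, 0), {(0, 0)}, n)

print([count_saws(n) for n in range(1, 9)])   # 4, 12, 36, 100, 284, 780, 2172, 5916
```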
A Hamilton surface decomposition of a graph is a decomposition of the collection of shortest cycles in such a way that each member of the decomposition determines a surface (with maximum Euler characteristic). Some sufficient conditions for Hamilton surface decomposition of cartesian products of graphs are obtained. Necessary and sufficient conditions are found for the case when factors are even cycles.
The minimal degree of a finite group $G$ is the smallest non-negative integer $n$ such that $G$ embeds in $\mathrm{Sym}(n)$. This defines an invariant of the group, $\mu(G)$. In this talk, I will present some interesting examples of calculating $\mu(G)$ and examine how this invariant behaves under taking direct products and homomorphic images.
In particular, I will focus on the problem of determining the smallest degree for which we obtain a strict inequality $\mu(G \times H) < \mu(G) + \mu(H)$, for two groups $G$ and $H$. The answer to this question also leads us to consider the problem of exceptional permutation groups. These are groups $G$ that possess a normal subgroup $N$ such that $\mu(G/N) > \mu(G)$. They are somewhat mysterious in the sense that a particular homomorphic image becomes 'harder' to faithfully represent than the group itself. I will present some recent examples of exceptional groups and detail recent developments in the 'abelian quotients conjecture', which states that $\mu(G/N) \leq \mu(G)$ whenever $G/N$ is abelian.
We prove that it is NP-hard for a coalition of two manipulators to compute how to manipulate the Borda voting rule. This resolves one of the last open problems in the computational complexity of manipulating common voting rules. Because of this NP-hardness, we treat computing a manipulation as an approximation problem where we try to minimize the number of manipulators. Based on ideas from bin packing and multiprocessor scheduling, we propose two new approximation methods to compute manipulations of the Borda rule. Experiments show that these methods significantly outperform the previous best known approximation method. We are able to find optimal manipulations in almost all the randomly generated elections tested. Our results suggest that, whilst computing a manipulation of the Borda rule by a coalition is NP-hard, computational complexity may provide only a weak barrier against manipulation in practice.
We also consider Nanson's and Baldwin's voting rules, which select a winner by successively eliminating candidates with low Borda scores. We theoretically and experimentally demonstrate that these rules are significantly more difficult to manipulate than the Borda rule. In particular, with unweighted votes, it is NP-hard to manipulate either rule with one manipulator, whilst with weighted votes, it is NP-hard to manipulate either rule with a small number of candidates and a coalition of manipulators.
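As a hedged illustration of the approximation viewpoint (a simple sequential greedy heuristic of my own, in the spirit of known methods rather than the bin-packing or scheduling based algorithms of the paper; the candidates and preference profile below are made up), each manipulator ranks the preferred candidate first and then the rivals from weakest to strongest by current Borda score.

```python
from collections import defaultdict

def borda_scores(votes, candidates):
    """votes: list of rankings from best to worst; returns Borda scores."""
    m = len(candidates)
    scores = defaultdict(int)
    for ranking in votes:
        for pos, c in enumerate(ranking):
            scores[c] += m - 1 - pos
    return scores

def greedy_manipulation(honest_votes, candidates, preferred, n_manipulators):
    """Each manipulator puts `preferred` first, then the others in increasing
    order of their current score (strong rivals receive the fewest points)."""
    votes = list(honest_votes)
    for _ in range(n_manipulators):
        scores = borda_scores(votes, candidates)
        others = sorted((c for c in candidates if c != preferred),
                        key=lambda c: scores[c])
        votes.append([preferred] + others)
    final = borda_scores(votes, candidates)
    winner = max(candidates, key=lambda c: final[c])
    return votes[len(honest_votes):], winner == preferred

candidates = ["a", "b", "c", "d"]
honest = [["b", "a", "c", "d"], ["c", "a", "b", "d"]]
ballots, success = greedy_manipulation(honest, candidates, "a", 1)
print(ballots, "-> preferred candidate wins:", success)
```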
We consider the problem of packing ellipsoids of different size and shape in an ellipsoidal container so as to minimize a measure of total overlap. The motivating application is chromosome organization in the human cell nucleus. A bilevel optimization formulation is described, together with an algorithm for the general case and a simpler algorithm for the special case in which all ellipsoids are in fact spheres. We prove convergence to stationary points of this nonconvex problem, and describe computational experience. The talk describes joint work with Caroline Uhler (IST, Vienna).
Having been constructed as trading strategies, option spreads are also used in margin calculations for offsetting positions in options. All option spreads that appear in trading and margining practice have two, three or four legs. As shown in Rudd and Schroeder (Management Sci, 1982), the problem of margining option portfolios where option spreads with two legs are used for offsetting can be solved in polynomial time by network flow algorithms. However, spreads with only two legs do not provide sufficient accuracy in measuring risk. Therefore, margining practice also employs spreads with three and four legs. A polynomial-time solution to the extension of the problem where option spreads with three and four legs are also used for offsetting is not known. We propose a heuristic network-flow algorithm for this extension and present a computational study that demonstrates high efficiency of the proposed algorithm in margining practice.
We consider a general class of convex optimization problems in which one seeks to minimize a strongly convex function over a closed and convex set which is by itself an optimal set of another convex problem. We introduce a gradient-based method, called the minimal norm gradient method, for solving this class of problems, and establish the convergence of the sequence generated by the algorithm as well as a rate of convergence of the sequence of function values. A portfolio optimization example is given in order to illustrate our results.
Graph closures have recently become an important tool in Hamiltonian Graph Theory, since the use of closure techniques often substantially simplifies the structure of a graph under consideration while preserving some of its prescribed properties (usually of Hamiltonian type). In the talk we show basic ideas behind the construction of some graph closures for claw-free graphs and techniques that allow one to reduce the problem to cubic graphs. The approach will be illustrated on a recently introduced closure concept for Hamilton-connectedness in claw-free graphs and, as an application, an asymptotically sharp Ore-type degree condition for Hamilton-connectedness in claw-free graphs will be obtained.
Two sets of functions are studied to ascertain whether they are Stieltjes functions and whether they are completely monotonic. The first group of functions are all built from the Lambert $W$ function. The $W$ function will be reviewed briefly. It will be shown that $W$ is Bernstein and various functions containing $W$ are Stieltjes. Explicit expressions for the Stieltjes transforms are obtained. We also give some new results regarding general Stieltjes functions.
The second set of functions was posed as a challenge by Christian Berg in 2002. The functions are $(1+a/x)^{(x+b)}$ for various $a$ and $b$. We show that the function is Stieltjes for some ranges of $a,b$ and investigate complete monotonicity experimentally for a larger range. We claim an accurate experimental value for the range.
My co-authors are Rob Corless, Peter Borwein, German Kalugin and Songxin Liang.
Let $K$ be a complete discrete valuation field of characteristic zero with residue field $k_K$ of characteristic $p > 0$. Let $L/K$ be a finite Galois extension with Galois group $G = \text{Gal}(L/K)$ and suppose that the induced extension of residue fields $k_L/k_K$ is separable. Let $W_n(.)$ denote the ring of $p$-typical Witt vectors of length $n$. Hesselholt [Galois cohomology of Witt vectors of algebraic integers, Math. Proc. Cambridge Philos. Soc. 137(3) (2004), 551–557] conjectured that the pro-abelian group $\{H^1(G,W_n(O_L))\}_{n>0}$ is isomorphic to zero. Hogadi and Pisolkar [On the cohomology of Witt vectors of $p$-adic integers and a conjecture of Hesselholt, J. Number Theory 131(10) (2011), 1797–1807] have recently provided a proof of this conjecture. In this talk, we present a simplified version of the original proof which avoids many of the calculations present in that version.
Integrability theory is the area of mathematics in which methods are developed for the exact solution of partial differential equations, as well as for the study of their properties. We concentrate on PDEs appearing in Physics and other applications. Darboux transformations constitute one of the important methods used in integrability theory and, as well as being a method for the exact solution of linear PDEs, they are an essential part of the method of Lax pairs, used for the solution of non-linear PDEs. A large series of Darboux transformations may be constructed using Wronskians built from some number of individual solutions of the original PDE. In this talk we prove a long-standing conjecture that this construction captures all possible Darboux transformations for transformations of order two, while for transformations of order one the construction captures everything but two Laplace transformations. An introduction into the theory will be provided.
Power line communication has been proposed as a possible solution to the "last mile" problem in telecommunications i.e. providing economical high speed telecommunications to millions of end users. As well as the usual background interference (noise), two other types of noise must also be considered for any successful practical implementation of power line communication. Coding schemes have traditionally been designed to deal only with background noise, and in such schemes it is often assumed that background noise affects symbols in codewords independently at random. Recently, however, new schemes have been proposed to deal with the extra considerations in power line communication. We introduce neighbour transitive codes as a group theoretic analogue to the assumption that background noise affects symbols independently at random. We also classify a family of neighbour transitive codes, and show that such codes have the necessary properties to be useful in power line communication.
We present a technique for enhancing a progressive hedging-based metaheuristic for a network design problem that models demand uncertainty with scenarios. The technique uses machine learning methods to cluster scenarios and, subsequently, the metaheuristic repeatedly solves multi-scenario subproblems (as opposed to single-scenario subproblems as is done in existing work). With a computational study we see that solving multi-scenario subproblems leads to a significant increase in solution quality and that how you construct these multi-scenario subproblems directly impacts solution quality. We also discuss how scenario grouping can be leveraged in a Benders' approach and show preliminary results of its effectiveness. This is joint work with Theo Crainic and Walter Rei at University of Quebec at Montreal.
We start this talk by introducing some basic definitions and properties relating to geodesics in the setting of metric spaces. After showing some important examples of geodesic metric spaces (which will be used throughout this talk), we shall define the concept of firmly nonexpansive mappings and we shall prove the existence, under mild conditions, of periodic points and fixed points for this class of mappings. Some of these results unify and generalize previous ones. We shall give a result relative to the $\Delta$-convergence to a fixed point of Picard iterates for firmly nonexpansive mappings, which is obtained from the asymptotic regularity of this class of iterates. Moreover, we shall get an effective rate of asymptotic regularity for firmly nonexpansive mappings (this result is new, as far as we know, even in linear spaces). Finally, we shall apply our results to a minimization problem. More precisely, we shall prove the $\Delta$-convergence to a minimizer of a proximal point-like algorithm when applied to a convex proper lower semi-continuous function defined on a CAT(0) space.
The Discrete Mathematics Instructional Seminar will be getting underway again this Thursday.
Parabolic obstacle problems find applications in the financial markets for pricing American put options. We present a mixed and an equivalent variational inequality hp-interior penalty DG (IPDG) method, combined with an hp-time DG (TDG) method, to approximately solve parabolic obstacle problems. The contact conditions are resolved by a biorthogonal Lagrange multiplier and are component-wise decoupled. These decoupled contact conditions are equivalent to finding the root of a non-linear complementarity function. This non-linear problem can in turn be solved efficiently by a semi-smooth Newton method. For the hp-adaptivity a p-hierarchical error estimator in conjunction with a local analyticity estimate is employed. For the considered stationary problem, this leads to exponential convergence, and for the instationary problem to greatly improved convergence rates. Numerical experiments are given demonstrating the strengths and limitations of the approaches.
Network infrastructures are a common phenomenon. Network upgrades and expansions typically occur over time due to budget constraints. We introduce a class of incremental network design problems that allow investigation of many of the key issues related to the choice and timing of infrastructure expansions and their impact on the costs of the activities performed on that infrastructure. We focus on the simplest variant, incremental network design with shortest paths, and show that even this variant is NP-hard. We investigate structural properties of optimal solutions, we analyze the worst-case performance of natural greedy heuristics, we derive a 4-approximation algorithm, and we present an integer programming formulation and conduct a small computational study.
Selection theorems assert that one can pick a well behaved function from a corresponding multifunction. They play a very important role in modern optimization theory. I will survey their structure and some applications before sketching some important open research problems.
The celebrated Littlewood conjecture in Diophantine approximation concerns the simultaneous approximation of two real numbers by rationals with the same denominator. A cousin of this conjecture is the mixed Littlewood conjecture of de Mathan and Teulié, which is concerned with the approximation of a single real number, but where some denominators are preferred to others.
In the talk, we will derive a metrical result extending work of Pollington and Velani on the Littlewood conjecture. Our result implies the existence of an abundance of numbers satisfying both conjectures.
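As a small numerical aside (mine, not part of the talk), the quantity $q\,\|q\alpha\|\,\|q\beta\|$ whose liminf the Littlewood conjecture asserts to be zero is easy to explore; the pair $(\alpha,\beta)=(\sqrt2,\sqrt3)$ and the search range are arbitrary choices made here.

```python
from math import sqrt

def dist_to_nearest_int(t):
    return abs(t - round(t))

alpha, beta = sqrt(2), sqrt(3)
best = float("inf")
for q in range(1, 200_000):
    val = q * dist_to_nearest_int(q * alpha) * dist_to_nearest_int(q * beta)
    best = min(best, val)
print("smallest value of q*||q*alpha||*||q*beta|| found:", best)
```

Of course no finite computation says anything about the liminf; the point is only to make the quantity appearing in the conjecture concrete.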
Selection theorems assert that one can pick a well behaved function from a corresponding multifunction. They play a very important role in modern optimization theory. In Part I, I will survey their structure and some applications before sketching some important applications and open research problems in Part II.
In this talk, we present a numerical method for a class of generalized inequality constrained integer linear programming (GILP) problems that includes the usual mixed-integer linear programming (MILP) problems as special cases. Instead of restricting certain variables to integer values as in MILP, we require in these GILP problems that some of the constraint functions take integer values. We present a tighten-and-branch method that has a number of advantages over the usual branch-and-cut algorithms. This includes the ability of keeping the number of constraints unchanged for all subproblems throughout the solution process and the capability of eliminating equality constraints. In addition, the method provides an algorithm framework that allows the existing cutting-plane techniques to be incorporated into the tightening process. As a demonstration, we will solve a well-known "hard ILP problem".
Symbolic and numeric computation have been distinguished by definition: numeric computation puts numerical values in its variables as soon as possible, symbolic computation as late as possible. Chebfun blurs this distinction, aiming for the speed of numerics with the generality and flexibility of symbolics. What happens when someone who has used both Maple and Matlab for decades, and has thereby absorbed the different fundamental assumptions into a "computational stance", tries to use Chebfun to solve a variety of computational problems? This talk reports on some of the outcomes.
The Mathematics and Statistics Learning Centre was established at the University of Melbourne over a decade ago, to respond to the needs of, initially, first year students of mathematics and statistics. The role of the centre and its Director has grown. The current Director, Dr Deborah King, will expound upon her role in the Centre.
The modernization of infrastructure networks requires coordinated planning and control. Considering traffic networks and electricity grids together raises similar issues about how to achieve substantial new capabilities in effectiveness and efficiency. For instance, power grids need to integrate renewable energy sources and electric vehicles. It is clear that all this can only be achieved by greater reliance on systematic planning in the presence of uncertainty, and on sensing, communications, computing and control on an unprecedented scale, these days captured in the term "smart grids". This talk will outline current research on planning future grids and control of smart grids. In particular, the possible roles of network science will be emphasized and the challenges that arise will be discussed.
The seventh problem posed by Hilbert in 1900 was resolved in the 1930s independently by A. Gelfond and Th. Schneider. The statement is that $a^b$ is transcendental for algebraic $a \ne 0,1$ and irrational algebraic $b$. The aim of the two 2-hour lectures is to give a proof of this result using the so-called method of interpolation determinants.
In this paper, we construct maximally monotone operators that are not of Gossez's dense-type (D) in many nonreflexive spaces. Many of these operators also fail to possess the Brøndsted-Rockafellar (BR) property. Using these operators, we show that the partial inf-convolution of two BC-functions will not always be a BC-function. This provides a negative answer to a challenging question posed by Stephen Simons. Among other consequences, we deduce that every Banach space which contains an isomorphic copy of the James space $J$ or its dual $J^*$, or of $c_0$ or its dual $\ell^1$, admits a non-type (D) operator.
In this talk, we consider the automorphism groups of the Cayley graph with respect to the Coxeter generators and the Davis complex of an arbitrary Coxeter group. We determine for which Coxeter groups these automorphism groups are discrete. In the case where they are discrete, we express them as semidirect products of two obvious families of automorphisms. This extends a result of Haglund and Paulin.
We investigate various properties of the sublevel set $\{x : g(x) \leq 1\}$ and the integration of $h$ on this sublevel set when $g$ and $h$ are positively homogeneous functions. For instance, the latter integral reduces to integrating $h\exp(- g)$ on the whole space $\mathbb{R}^n$ (a non-Gaussian integral) and when $g$ is a polynomial, then the volume of the sublevel set is a convex function of its coefficients.
In fact, whenever $h$ is non-negative, the functional $\int \phi(g)h dx$ is a convex function of $g$ for a large class of functions $\phi:\mathbb{R}_{+} \to \mathbb{R}$. We also provide a numerical approximation scheme to compute the volume or integrate $h$ (or, equivalently, to approximate the associated non-Gaussian integral). We also show that finding the sublevel set $\{x : g(x) \leq 1\}$ of minimum volume that contains some given subset $K$ is a (hard) convex optimization problem for which we also propose two convergent numerical schemes. Finally, we provide a Gaussian-like property of non-Gaussian integrals for homogeneous polynomials that are sums of squares and critical points of a specific function.
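One concrete instance of the reduction described above, in my reading and with an example of my own choosing: if $g$ is nonnegative and positively homogeneous of degree $p$ on $\mathbb{R}^n$, then $\mathrm{vol}\{x : g(x) \le 1\} = \frac{1}{\Gamma(1+n/p)}\int_{\mathbb{R}^n} e^{-g(x)}\,dx$, which the sketch below checks by Monte Carlo for $g(x)=x_1^4+x_2^4$.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)
n_dim, degree, n_samples = 2, 4, 500_000
g = lambda x: x[..., 0] ** 4 + x[..., 1] ** 4        # positively homogeneous, degree 4

# Direct estimate of vol{g <= 1}; the set lies inside the box [-1, 1]^2 of area 4.
pts = rng.uniform(-1.0, 1.0, size=(n_samples, n_dim))
vol_direct = 4.0 * np.mean(g(pts) <= 1.0)

# Via the non-Gaussian integral: vol{g <= 1} = (1/Gamma(1 + n/p)) * integral of exp(-g),
# with the integral estimated by importance sampling against a standard Gaussian.
z = rng.standard_normal((n_samples, n_dim))
gauss_pdf = np.exp(-0.5 * np.sum(z ** 2, axis=1)) / (2.0 * np.pi)
vol_via_integral = np.mean(np.exp(-g(z)) / gauss_pdf) / gamma(1.0 + n_dim / degree)

print("direct volume estimate :", vol_direct)
print("via exp(-g) integral   :", vol_via_integral)
```

Both estimates agree to Monte Carlo accuracy, illustrating how the volume computation is replaced by a single integral of $e^{-g}$ over all of $\mathbb{R}^n$.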
Simultaneous Localisation and Mapping (SLAM) has become prominent in the field of robotics over the last decade, particularly in application to autonomous systems. SLAM enables any system equipped with exteroceptive (and often inertial) sensors to simultaneously update its own positional estimate and map of the environment by utilising information collected from the surroundings. The solution to the probabilistic SLAM problem can be derived using Bayes Theorem to yield estimates of the system state and covariance. In recursive form, the basic prediction-correction algorithm employs an Extended Kalman Filter (EKF) with Cholesky decomposition for numerical stability during inversion. This talk will present the mathematical formulation and solution of the SLAM problem, along with some algorithms used in implementation. We will then look at some applications of SLAM in the real world and discuss some of the challenges for future development.
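As a minimal sketch (mine, not drawn from the talk) of the prediction-correction structure mentioned above: the routine below performs one Kalman cycle with linear models, which is what the EKF reduces to once the motion and observation models have been linearised; for brevity it uses a plain matrix inverse rather than the Cholesky-based update, and the toy constant-velocity model and noise levels are invented for the example.

```python
import numpy as np

def kalman_step(x, P, u, z, F, B, H, Q, R):
    """One predict-correct cycle; in EKF-SLAM, F and H would be Jacobians of the
    motion and observation models evaluated at the current estimate."""
    x_pred = F @ x + B @ u                       # predict state
    P_pred = F @ P @ F.T + Q                     # predict covariance
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)        # correct state with measurement z
    P_new = (np.eye(len(x)) - K @ H) @ P_pred    # correct covariance
    return x_new, P_new

# Toy 1D constant-velocity model: state (position, velocity), position measured.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.25]])

x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, u=np.array([1.0]), z=np.array([0.05]), F=F, B=B, H=H, Q=Q, R=R)
print("state estimate:", x)
```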
In my opinion, the most significant unsolved problem in graph decompositions is the cycle double cover conjecture. This begins a series of talks on this conjecture in terms of background, relations to other problems and partial results.
This will be an introductory talk which begins by describing the four colour theorem and finite projective planes in the setting of graph decompositions. A problem posed by Ringel at a graph theory meeting in Oberwolfach in 1967 will then be discussed. This problem is now widely known as the Oberwolfach Problem, and is a generalisation of a question asked by Kirkman in 1850. It concerns decompositions of complete graphs into isomorphic copies of spanning regular graphs of degree two.
In this talk, we consider the structure of maximally monotone operators in Banach space whose domains have nonempty interior, and we present new and explicit structure formulas for such operators. Along the way, we provide new proofs of the norm-to-weak* closedness and of property (Q) for these operators (recently established by Voisei). Various applications and limiting examples are given. This is joint work with Jon Borwein.
Brian Alspach will continue with "The Anatomy of a Famous Conjecture" this Thursday. One can easily pick up the thread this week without having attended last week, but if you miss this week it will not be easy to join in next week.
I have embarked on a project of looking for Hamilton paths in Cayley graphs on finite Coxeter groups. This talk is a report on the progress thus far.
The exceptional Lie group $G_2$ is a beautiful 14-dimensional continuous group, having relations with such diverse notions as triality, the 7-dimensional cross product and exceptional holonomy. It was found abstractly by Killing in 1887 (complex case) and then realized as a symmetry group by Engel and Cartan in 1894 (real split case). Later, in 1910, Cartan returned to the topic and realized split $G_2$ as the maximal finite-dimensional symmetry algebra of a rank 2 distribution in $\mathbb{R}^5$. In other words, Cartan classified all symmetry groups of Monge equations of the form $y'=f(x,y,z,z',z'')$. I will discuss the higher-dimensional generalization of this fact, based on joint work with Ian Anderson. The compact real form of $G_2$ was realized by Cartan as the automorphism group of the octonions in 1914. In the talk I will also explain how to realize this $G_2$ as the maximal symmetry group of a geometric object.
12:00-1:00 | Michael Coons (University of Waterloo)
1:00-2:00 | Claus Koestler (Aberystwyth University)
2:00-3:00 | Eric Mortenson (The University of Queensland)
3:00-4:00 | Ekaterina Shemyakova (University of Western Ontario)
Brian Alspach will continue with "The Anatomy Of A Famous Conjecture" this Thursday. One can easily pick up the thread this week without having attended last week, but if you miss this week it will not be easy to join in next week.
In this talk, we consider a general convex feasibility problem in Hilbert space, and analyze a primal-dual pair of problems generated via a duality theory introduced by Svaiter. We present some algorithms and their convergence properties. The focus is a general primal-dual principle for strong convergence of some classes of algorithms. In particular, we give a different viewpoint for the weak-to-strong principle of Bauschke and Combettes. We also discuss how subgradient and proximal type methods fit in this primal-dual setting.
Joint work with Maicon Marques Alves (Universidade Federal de Santa Catarina-Brazil)
The talk will outline some topics associated with constructions for Hadamard matrices, in particular, a relatively simple construction, given by a sum of Kronecker products of ingredient matrices obeying certain conditions. Consideration of the structure of the ingredient matrices leads, on the one hand, to consideration of division algebras and Clifford algebras, and on the other hand, to searching for multisets of {-1,1} ingredient matrices. Structures within the sets of ingredient matrices can make searching more efficient.
We consider some fundamental generalized Mordell-Tornheim-Witten (MTW) zeta-function values along with their derivatives, and explore connections with multiple-zeta values (MZVs). To achieve these results, we make use of symbolic integration, high precision numerical integration, and some interesting combinatorics and special-function theory.
Our original motivation was to represent previously unresolved constructs such as Eulerian log-gamma integrals. Indeed, we are able to show that all such integrals belong to a vector space over an MTW basis, and we also present, for a substantial subset of this class, explicit closed-form expressions. In the process, we significantly extend methods for high-precision numerical computation of polylogarithms and their derivatives with respect to order. That said, the focus of our paper is the relation between MTW sums and classical polylogarithms. It is the adumbration of these relationships that makes the study significant.
The associated paper (with DH Bailey and RE Crandall) is at http://carmasite.newcastle.edu.au/jon/MTW1.pdf.
Approximation theory is a classical part of the analysis of functions defined on a Euclidean space or its subsets, and the foundation of its applications, while problems related to high or infinite dimensions create known challenges even in the setting of Hilbert spaces. The stability (uniform continuity) of a mapping is one of the traditional properties investigated in various branches of pure and applied mathematics, with further applications in engineering. Examples include the analysis of linear and non-linear PDEs, (short-term) prediction problems, decision-making and data evolution.
We describe the uniform approximation properties of the uniformly continuous mappings between the pairs of Banach and, occasionally, metric spaces from various wide parameterised and non-parameterised classes of spaces with or without the local unconditional structure in a quantitative manner. The striking difference with the finite-dimensional setting is represented by the presence of Tsar'kov's phenomenon. Many tools in use are developed under the scope of our quasi-Euclidean approach. Its idea seems to be relatively natural in light of the compressed sensing and distortion phenomena.
We consider some fundamental generalized Mordell-Tornheim-Witten (MTW) zeta-function values along with their derivatives, and explore connections with multiple-zeta values (MZVs). To achieve these results, we make use of symbolic integration, high precision numerical integration, and some interesting combinatorics and special-function theory.
Our original motivation was to represent previously unresolved constructs such as Eulerian log-gamma integrals. Indeed, we are able to show that all such integrals belong to a vector space over an MTW basis, and we also present, for a substantial subset of this class, explicit closed-form expressions. In the process, we significantly extend methods for high-precision numerical computation of polylogarithms and their derivatives with respect to order. That said, the focus of our paper is the relation between MTW sums and classical polylogarithms. It is the adumbration of these relationships that makes the study significant.
The associated paper (with DH Bailey and RE Crandall) is at http://carmasite.newcastle.edu.au/jon/MTW1.pdf.
Brian Alspach will continue his discussion "The Anatomy Of A Famous Conjecture."
This talk will discuss opportunities and challenges related to the development and application of operations research techniques to transportation and logistics problems in non-profit settings. Much research has been conducted on transportation and logistics problems in commercial settings where the goal is either to maximize profit or to minimize cost. Significantly less work has been conducted for non-profit applications. In such settings, the objectives are often more difficult to quantify since issues such as equity and sustainability must be considered, yet efficient operations are still crucial. This talk will present several research projects that introduce new approaches tailored to the objectives and constraints unique to non-profit agencies, which are often concerned with obtaining equitable solutions given limited, and often uncertain, budgets, rather than with maximizing profits.
This talk will assess the potential of operations research to address the problems faced by non-profit agencies and attempt to understand why these problems have been understudied within the operations research community. To do so, we will ask the following questions: are non-profit operations problems rich enough for academic study, and are solutions to non-profit operations problems applicable to real communities?
Brian Alspach will continue his discussion "The Anatomy Of A Famous Conjecture."
This talk will survey some of the classical and recent results concerning operators composed of a projection onto a compact set in time, followed by a projection onto a compact set in frequency. Such "time- and band-limiting" operators were studied by Landau, Slepian, and Pollak in a series of papers published in the Bell System Technical Journal in the early 1960s, identifying the eigenfunctions, providing eigenvalue estimates, and describing spaces of "essentially time- and band-limited signals."
Further progress on time- and band-limiting has been intermittent, but genuine recent progress has been made in terms of numerical analysis, sampling theory, and extensions to multiband signals, all driven to some extent by potential applications in communications. After providing an outline of the historical developments in the mathematical theory of time- and bandlimiting, some details of the sampling theory and multiband setting will be given. Part of the latter represents joint work with Jeff Hogan and Scott Izu.
Brian Alspach will continue his discussion "The Anatomy Of A Famous Conjecture."
This involves (in pre-nonstandard analysis times) the development of a simple system of infinities and infinitesimals that helps to clarify Cantor's ternary set, nonmeasurable sets and Lebesgue integration. The talk will include other memories as a maths student at Newcastle University College, Tighes Hill, from 1959 to 1961.
This week Brian Alspach concludes his series of talks entitled "The Anatomy Of A Famous Conjecture." We shall be in V27 - note room change.
A graph on v vertices is called pancyclic if it contains cycles of every length from 3 to v. Obviously such graphs exist — the complete graph on v vertices is an example. We shall look at the question, what is the minimum number of edges in a pancyclic graph? Interestingly, this question was "solved", incorrectly, in 1978. A complete solution is not yet known.
This week the speaker in the Discrete Mathematics Instructional Seminar is Judy-anne Osborn who will be discussing Hadamard matrices.
There is a high prevalence of tuberculosis (TB) in Papua New Guinea (PNG), which is exacerbated by the presence of drug-resistant TB strains and HIV infection. This is an important public health issue not only locally within PNG, but also in Australia due to the high cross-border traffic in the Torres Strait Island–Western Province (PNG) treaty region. A metapopulation model is used to evaluate the effect of varying control strategies in the region, and some initial cost-benefit analysis figures are presented.
The double zeta values are one natural way to generalise the Riemann zeta function at the positive integers; they are defined by $\zeta(a,b) = \sum_{n=1}^\infty \sum_{m=1}^{n-1} \frac{1}{n^a m^b}$. We give a unified and completely elementary method to prove several sum formulae for the double zeta values. We also discuss an experimental method for discovering such formulae.
Moreover, we use a reflection formula and recursions involving the Riemann zeta function to obtain new relations of closely related functions, such as the Witten zeta function, alternating double zeta values, and more generally, character sums.
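As a small numerical illustration (mine, not from the abstract) of one classical sum formula, Euler's $\zeta(2,1)=\zeta(3)$, under the convention above:

```python
from math import fsum

def double_zeta(a, b, N=200000):
    """Truncated double zeta value: sum over N > n > m >= 1 of 1/(n^a m^b)."""
    total, inner = 0.0, 0.0
    for n in range(2, N):
        inner += 1.0 / (n - 1) ** b      # inner = sum_{m=1}^{n-1} 1/m^b
        total += inner / n ** a
    return total

zeta3 = fsum(1.0 / n ** 3 for n in range(1, 200000))
print(double_zeta(2, 1), zeta3)          # both ~1.2020; truncation limits agreement to ~4 digits
```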
TBA
This week the speaker in the Discrete Mathematics Instructional Seminar is Judy-anne Osborn who will be discussing Hadamard matrices.
I will give a brief introduction to the theory of self-similar groups, focusing on a couple of pertinent examples: Grigorchuk's group of intermediate growth, and the basilica group.
Based on generalized backward shift operators we introduce adaptive Fourier decomposition. Then we discuss its relations and applications to (1) system identification; (2) computation of the Hilbert transform; (3) algorithms for the best order-$n$ rational approximation to functions in the Hardy space $H^2$; (4) forward and backward shift invariant spaces; (5) band preserving in filter design; (6) phase retrieval; and (7) the Bedrosian identity. The talk also concerns possible generalizations of the theory and applications to higher dimensional spaces.
The Douglas-Rachford algorithm is an iterative method for finding a point in the intersection of two (or more) closed sets. It is well known that the iteration converges (weakly) when it is applied to convex subsets of a Hilbert space. Despite the absence of a theoretical justification, the algorithm has also been successfully applied to various non-convex practical problems, including finding solutions of the eight queens problem and of sudoku puzzles. In particular, we will show how these two problems can be easily modelled.
With the aim of providing some theoretical explanation of the convergence in the non-convex case, we have established a region of convergence for the prototypical non-convex Douglas-Rachford iteration which finds a point in the intersection of a line and a circle. Previous work was only able to establish local convergence, and was ineffective in that no explicit region of convergence could be given.
PS: Bring your hardest sudoku puzzle :)
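For orientation, a small sketch (mine, not the authors' code) of the prototypical non-convex instance mentioned above: the Douglas-Rachford iteration applied to the unit circle A and the horizontal line B = {y = 0.7}.

```python
import numpy as np

P_A = lambda x: x / np.linalg.norm(x)            # project onto the unit circle
P_B = lambda x: np.array([x[0], 0.7])            # project onto the line y = 0.7
R_A = lambda x: 2 * P_A(x) - x                   # reflections in each set
R_B = lambda x: 2 * P_B(x) - x
T   = lambda x: (x + R_B(R_A(x))) / 2            # Douglas-Rachford operator

x = np.array([-0.2, 1.5])                        # an arbitrary (nonzero) starting point
for _ in range(100):
    x = T(x)

print(P_A(x))   # expected to be close to an intersection point (±sqrt(0.51), 0.7) ≈ (±0.714, 0.7)
```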
A body moves in a rarefied medium of resting particles and at the same time very slowly rotates (somersaults). Each particle of the medium is reflected elastically when hitting the body boundary (multiple reflections are possible). The resulting resistance force acting on the body depends on the time; we are interested in minimizing the time-averaged value of resistance (which is called $R$). The value $R(B)$ is well defined in terms of billiard in the complement of $B$, for any bounded body $B \subset \mathbb{R}^d$, $d\geq 2$ with piecewise smooth boundary.
Let $C\subset\mathbb{R}^d$ be a bounded convex body and let $C_1\subset C$ be another convex body with $\partial C_1 \cap \partial C=\varnothing$. It would be interesting to get an estimate for $$R(C_1,C)= \inf_{C_1\subset B \subset C} R(B). \qquad (1)$$ If $\partial C_1$ is close to $\partial C$, problem (1) can be referred to as minimizing the resistance of the convex body $C$ by "roughening" its surface. We cannot solve problem (1); however, we can find the limit $$\lim_{\mathrm{dist}(\partial C_1,\partial C)\rightarrow 0} \frac{R(C_1,C)}{R(C)}. \qquad (2)$$
It will be explained that problem (2) can be solved by reduction to a special problem of optimal mass transportation, where the initial and final measurable spaces are complementary hemispheres, $X=\{x=(x_1,...,x_d)\in S^{d-1}: x_1\geq 0\}$ and $Y=\{x\in S^{d-1}:x_1\leq 0\}$. The transportation cost is the squared distance, $c(x,y)=\frac{1}{2}|x-y|^2$, and the measures in $X$ and $Y$ are obtained from the $(d-1)$-dimensional Lebesgue measure on the equatorial circle $\{x=(x_1,...,x_d):|x|\leq 1,x_1=0\}$ by parallel translation along the vector $e_1=(1,0,...,0)$. Let $C(\nu)$ be the total cost corresponding to the transport plan $\nu$ and let $\nu_0$ be the transport plan generated by parallel translation along $e_1$; then the value $\frac{\inf C(\nu)}{C(\nu_0)}$ coincides with the limit in (2).
Surprisingly, this limit does not depend on the body $C$ and depends only on the dimension $d$.
In particular, if $d=3$ ($d=2$), it equals (approximately) 0.96945 (0.98782). In other words, the resistance of a 3-dimensional (2-dimensional) convex body can be decreased by 3.05% (correspondingly, 1.22%) at most by roughening its surface.
Motivated by questions of algorithm analysis, we provide several distinct approaches to determining convergence and limit values for a class of linear iterations.
This is joint work with D. Borwein and B. Sims.
We consider the bipartite version of the degree/diameter problem; namely, find the maximum number Nb(d,D) of vertices in a bipartite graph of maximum degree d>2 and diameter D>2. The actual value of Nb(d,D) is still unknown for most (d,D) pairs.
The well-known Moore bound Mb(d,D) gives a general upper bound for Nb(d,D); graphs attaining this bound are called Moore (bipartite) graphs. Moore bipartite graphs are very scarce; they may only exist for D=3,4 or 6, but no other diameters. Interest has therefore shifted to investigating the existence or otherwise of graphs missing the Moore bound by a few vertices. A graph with order Mb(d,D)-e is called a graph of defect e.
It has been proved that bipartite graphs of defect 2 do not exist when D>3. In our paper we 'almost' prove that bipartite graphs of defect 4 cannot exist when D>4, thereby establishing a new upper bound on Nb(d,D) for more than 2/3 of all (d,D) combinations.
We present a nonconvex bundle technique where function and subgradient values are available only up to an error tolerance which remains unknown to the user. The challenge is to develop an algorithm which converges to an approximate solution which, despite the lack of information, is as good as one can hope for. For instance, if data are known up to the error $O(\epsilon)$, the solution should also be accurate up to $O(\epsilon)$. We show that the oracle of downshifted tangents is an excellent tool to deal with this difficult situation.
Dr Koerber will speak about the experience of using MapleTA extensively in undergraduate teaching at the University of Adelaide, and demonstrate how they have been using the system there. Bio: Adrian Koerber is Director of First Year Studies in Mathematics at the University of Adelaide. His mathematical research is in the area of modelling gene networks.
We consider the bipartite version of the degree/diameter problem; namely, find the maximum number Nb(d,D) of vertices in a bipartite graph of maximum degree d>2 and diameter D>2. The actual value of Nb(d,D) is still unknown for most (d,D) pairs.
The well-known Moore bound Mb(d,D) gives a general upper bound for Nb(d,D); graphs attaining this bound are called Moore (bipartite) graphs. Moore bipartite graphs are very scarce; they may only exist for D=3,4 or 6, but no other diameters. Interest has therefore shifted to investigating the existence or otherwise of graphs missing the Moore bound by a few vertices. A graph with order Mb(d,D)-e is called a graph of defect e.
It has been proved that bipartite graphs of defect 2 do not exist when D>3. In our paper we 'almost' prove that bipartite graphs of defect 4 cannot exist when D>4, thereby establishing a new upper bound on Nb(d,D) for more than 2/3 of all (d,D) combinations.
Snarks are 3-regular graphs that are not 3-edge-colourable and are cyclically 4-edge-connected. They exist but are hard to find. On the other hand, it is believed that Cayley graphs can never be snarks. The latter is the subject of the next series of talks.
Hajek proved that a WUR Banach space is an Asplund space. This result suggests that the WUR property might have interesting consequences as a dual property. We show that
(i) every Banach space with separable second dual can be equivalently renormed to have WUR dual,
(ii) under certain embedding conditions a Banach space with WUR dual is reflexive.
Snarks are 3-regular graphs that are not 3-edge-colourable and are cyclically 4-edge-connected. They exist but are hard to find. On the other hand, it is believed that Cayley graphs can never be snarks. The latter is the subject of the next series of talks.
Snarks are 3-regular graphs that are not 3-edge-colourable and are cyclically 4-edge-connected. They exist but are hard to find. On the other hand, it is believed that Cayley graphs can never be snarks. The latter is the subject of the next series of talks.
Let $F(z)$ be a power series, say with integer coefficients. In the late 1920s and early 1930s, Kurt Mahler discovered that for $F(z)$ satisfying a certain type of functional equation (now called Mahler functions), the transcendence of the function $F(z)$ could be used to prove the transcendence of certain special values of $F(z)$. Mahler's main application at the time was to prove the transcendence of the Thue-Morse number $\sum_{n\geq 0}t(n)/2^n$, where $t(n)$ is either 0 or 1 depending on the parity of the number of 1s in the base 2 expansion of $n$. In this talk I will discuss some of the connections between Mahler functions and finite automata and highlight some recent approaches to large problems in the area. If time permits, I will outline a new proof of a version of Carlson's theorem for Mahler functions; that is, a Mahler function is either rational or it has the unit circle as a natural boundary.
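For the curious, a quick computation (mine, not part of the abstract) of the Thue-Morse number as defined above:

```python
# Compute the Thue-Morse number sum_{n>=0} t(n)/2^n, where t(n) is the parity of the
# number of 1s in the binary expansion of n (terms beyond n = 60 are below double precision).
from fractions import Fraction

t = lambda n: bin(n).count("1") % 2
tm = sum(Fraction(t(n), 2**n) for n in range(60))
print(float(tm))   # ~0.8249080672..., i.e. twice the usual Thue-Morse constant 0.4124540336...
```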
(Joint speakers, Jon Borwein and Michael Rose)
Using fractal self-similarity and functional-expectation relations, the classical theory of box integrals is extended to encompass a new class of fractal “string-generated Cantor sets” (SCSs) embedded in unit hypercubes of arbitrary dimension. Motivated by laboratory studies on the distribution of brain synapses, these SCSs were designed for dimensional freedom: a suitable choice of generating string allows for fine-tuning the fractal dimension of the corresponding set. We also establish closed forms for certain statistical moments on SCSs and report various numerical results. The associated paper is at http://www.carma.newcastle.edu.au/jon/papers.html#PAPERS.
Burnside's Theorem characterising transitive permutation groups of prime degree has some wonderful applications for graphs. This week we start an exploration of this topic.
We are holding an afternoon mini-conference, in conjunction with the School of Mathematical and Physical Sciences.
If you are engaged in any of the many Outreach Activities in the Mathematical Sciences that people from CARMA, our School and beyond contribute to, for example visiting primary or secondary schools, presenting to schools who visit us, public lectures, media interviews, helping run maths competitions and so on, and would like to share what you're doing, please let us know. Also if you're not currently engaged in an outreach activity but have an idea that you would like to try, and want to use a talk about your idea as a "sounding board", please feel free to do so.
There will be some very short talks: 5 minutes, and some longer talks: 20 minutes, with time for discussion in between. We'll be serving afternoon tea throughout the afternoon, and will have an open discussion forum near the end of the day. If you're interested in giving a talk please contact Judy-anne.Osborn@newcastle.edu.au, indicating whether you'd prefer a 5-minute or a 20-minute slot. If you're simply interested in attending, please let us know as well for catering purposes. The event will be held in one of the function rooms in the Shortland building.
12:05 — Begin, with welcome and lunch
15:45 — Last talk finishes
15:45-16:15 — Open discussion
George is going to start giving some talks on the embedding of Higman-Thompson groups into the groups of almost automorphisms of trees, and over several weeks will look at the structure theorems and scale calculations for these examples.
We shall continue exploring implications of Burnside's Theorem for vertex-transitive graphs.
Groundwater makes up nearly 30% of the entire world’s freshwater, but the mathematical models for better understanding the system are difficult to validate due to the disordered nature of the porous media and the complex geometry of the channels of flow. In this seminar, after establishing the statistical macroscopic equivalent of the Navier-Stokes equations for groundwater hydrodynamics and its consequences in terms of Laplace and diffusion equations, some cases will be solved in terms of special functions by using a modern computer algebra system.
Variational methods have been used to derive symmetric solutions for many problems related to real world applications. To name a few we mention periodic solutions to ODEs related to N-body problems and electrical circuits, symmetric solutions to PDEs, and symmetry in derivatives of spectral functions. In this talk we examine the commonalities of using variational methods in the presence of symmetry.
This is an ongoing collaborative research project with Jon Borwein. So far our questions still outnumber our answers.
George is going to continue his series of talks on the embedding of Higman-Thompson groups into the groups of almost automorphisms of trees, and over several weeks look at the structure theorems and scale calculations for these examples.
We shall continue exploring implications of Burnside's Theorem for vertex-transitive graphs.
Given a positive integer b, we say that a mathematical constant alpha is "b-normal" or "normal base b" if every m-long string of digits appears in the base-b expansion of alpha with precisely the limiting frequency 1/b^m. Although it is well known from measure theory that almost all real numbers are b-normal for all integers b > 1, nonetheless proving normality (or nonnormality) for specific constants, such as pi, e and log(2), has been very difficult.
In the 21st century, a number of different approaches have been attempted on this problem. For example, a recent study employed a Poisson model of normality to conclude that, based on the first four trillion hexadecimal digits of pi, it is exceedingly unlikely that pi is not normal. In a similar vein, graphical techniques, in most cases based on digit-generated "random" walks, have been successfully employed to detect nonnormality in some cases.
On the analytical front, it was shown in 2001 that the normality of certain reals, including log(2) and pi (or any other constant given by a BBP formula), could be reduced to a question about the behavior of certain specific pseudorandom number generators. Subsequently normality was established for an uncountable class of reals (the "Stoneham numbers"), the simplest of which is: alpha_{2,3} = Sum_{n >= 0} 1/(3^n 2^(3^n)), which is provably normal base 2. Just as intriguing is a recent result that alpha_{2,3}, for instance, is provably NOT normal base 6. These results have now been generalized to some extent, although many open cases remain.
In this talk I will present an introduction to the theory of normal numbers, including brief mention of new graphical- and statistical-based techniques. I will then sketch a proof of the normality base 2 (and nonnormality base 6) of Stoneham numbers, then suggest some additional lines of research. Various parts of this research were conducted in collaboration with Richard Crandall, Jonathan and Peter Borwein, Francisco Aragon, Cristian Calude, Michael Dinneen, Monica Dumitrescu and Alex Yee.
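As a purely empirical companion to the above (my sketch, no part of the proofs): the digit statistics of the Stoneham number alpha_{2,3}, computed straight from its defining series, already hint at normality base 2 and non-normality base 6.

```python
from collections import Counter
from mpmath import mp, mpf

mp.dps = 3000                                      # roughly 10^4 bits of working precision
alpha = sum(mpf(1) / (3**n * mpf(2)**(3**n)) for n in range(0, 9))   # Stoneham alpha_{2,3}

def digits(x, base, k):
    """First k base-`base` digits of the fractional part of x."""
    out, frac = [], x - int(x)
    for _ in range(k):
        frac *= base
        d = int(frac)
        out.append(d)
        frac -= d
    return out

print(Counter(digits(alpha, 2, 8000)))   # roughly half 0s and half 1s
print(Counter(digits(alpha, 6, 3000)))   # visibly far from uniform, consistent with non-normality base 6
```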
A frequent theme of 21st century experimental math is the computer discovery of identities, typically done by means of computing some mathematical entity (a sum, limit, integral, etc) to very high numeric precision, then using the PSLQ algorithm to identify the entity in terms of well known constants.
Perhaps the most successful application of this methodology has been to identify integrals arising in mathematical physics. This talk will present numerous examples of this type, including integrals from quantum field theory, Ising theory, random walks, 3D lattice problems, and even mouse brains. In some cases, it is necessary to compute these integrals to 3000-digit precision, and developing techniques to do such computations is a daunting technical challenge.
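A toy instance (mine, not one of the physics integrals from the talk) of the methodology: compute a quantity to high precision, then ask PSLQ for an integer relation against likely constants. Here it "rediscovers" zeta(2) = pi^2/6 using mpmath's built-in pslq.

```python
from mpmath import mp, zeta, pi, pslq

mp.dps = 50                                   # 50 digits of working precision
value = zeta(2)                               # pretend this came from a numerical integration
print(pslq([value, pi**2]))                   # [6, -1] (up to sign), i.e. 6*zeta(2) - pi^2 = 0
```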
George continues his series of talks on the embedding of Higman-Thompson groups into the groups of almost automorphisms of trees, structure theorems and scale calculations for these examples.
This week Brian Alspach will complete the discussion on Burnside's Theorem and vertex-transitive graphs of prime order.
Recently the Alternating Projection Algorithm was extended into CAT(0) spaces. We will look at this, and also at current work on extending the Douglas-Rachford algorithm into CAT(0) spaces. By using CAT(0) spaces the underlying linear structure of the space is dispensable, and this allows certain algorithms to be extended to spaces such as classical hyperbolic spaces, simply connected Riemannian manifolds of non-positive curvature, R-trees and Euclidean buildings.
In this talk, we study the properties of integral functionals induced on $L_\text{E}^1(S,\mu)$ by closed convex functions on a Euclidean space E. We give sufficient conditions for such integral functions to be strongly rotund (well-posed). We show that in this generality functions such as the Boltzmann-Shannon entropy and the Fermi-Dirac entropy are strongly rotund. We also study convergence in measure and give various limiting counter-examples.
This is joint work with Jon Borwein.
George continues his series of talks on the embedding of Higman-Thompson groups into the groups of almost automorphisms of trees, structure theorems and scale calculations for these examples.
I will discuss four much abused words Interdisciplinarity, Innovation, Collaboration and Creativity. I will describe what they mean for different stakeholder groups and will speak about my own experiences as a research scientist, as a scientific administrator, as an educator and even as a small high-tech businessman. I will also offer advice that can of course be ignored.
Linear water wave theory is one of the most important branches of fluid mechanics. Practically, it underpins most of the engineering design of ships, offshore structures, etc. It also has a very rich history in the development of applied mathematics. In this talk I will focus on the connection between solutions in the frequency and time domains and show how we can use various formulations to make numerical calculations and to construct approximate solutions. I will illustrate these methods with application to some simple wave scattering problems.
We consider the problem of characterising embeddings of an abstract group into totally disconnected locally compact (tdlc) groups. Specifically, for each pair of nonzero integers $m,n$ we construct a tdlc group containing the Baumslag-Solitar group $BS(m,n)$ as a dense subgroup, and compute the scales of elements and flat rank of the tdlc group.
This is joint work with George Willis.
In this talk, we study the properties of integral functionals induced on the Banach space of integrable functions by closed convex functions on a Euclidean space.
We give sufficient conditions for such integral functions to be strongly rotund (well-posed). We show that in this generality functions such as the Boltzmann-Shannon entropy and the Fermi-Dirac entropy are strongly rotund. We also study convergence in measure and give various limiting counter-examples.
In this talk, projection algorithms for solving (nonconvex) feasibility problems in Euclidean spaces are considered. Of special interest are the Method of Alternating Projections (MAP) and the Averaged Alternating Reflection Algorithm (AAR), which cover some of the state-of-the-art algorithms for our intended application, the phase retrieval problem. In the case of convex feasibility, firm nonexpansiveness of projection mappings is a global property that yields global convergence of MAP and, for consistent problems, AAR. Based on epsilon-delta regularity of sets (Bauschke, Luke, Phan, Wang 2012), a relaxed local version of firm nonexpansiveness with respect to the intersection is introduced for consistent feasibility problems. This, combined with a type of coercivity condition which relates to the regularity of the intersection, yields local linear convergence of MAP for a wide class of nonconvex problems, and even local linear convergence of AAR in more limited nonconvex settings.
If some arithmetical sums are small then the complex zeroes of the zeta-function are linearly dependent. Since we don't believe the conclusion we ought not to believe the premise. I will show that the zeroes are 'almost linearly independent' which implies, in particular, that the Mertens conjecture fails more drastically than was previously known.
In this talk, we will show that a D-finite Mahler function is necessarily rational. This gives a new proof of the rational-transcendental dichotomy of Mahler functions due to Nishioka. Using our method of proof, we also provide a new proof of a Pólya-Carlson type result for Mahler functions due to Randé; that is, a Mahler function which is meromorphic in the unit disk is either rational or has the unit circle as a natural boundary. This is joint work with Jason Bell and Eric Rowland.
In 1966 Gallai conjectured that a connected graph of order n can be decomposed into n/2 or fewer paths when n is even, or (n+1)/2 or fewer paths when n is odd. We shall discuss old and new work on this as yet unsolved conjecture.
Motivated by the desire to visualise large mathematical data sets, especially in number theory, we offer various tools for representing floating point numbers as planar walks and for quantitatively measuring their “randomness”.
What to expect: some interesting ideas, many beautiful pictures (including a 108-gigapixel picture of π), and some easy-to-understand maths.
What you won’t get: too many equations, difficult proofs, or any “real walking”.
This is a joint work with David Bailey, Jon Borwein and Peter Borwein.
Many cognitive models derive their predictions through simulation. This means that it is difficult or impossible to write down a probability distribution or likelihood that characterizes the random behavior of the data as a function of the model's parameters. In turn, the lack of a likelihood means that standard Bayesian analyses of such models are impossible. In this presentation we demonstrate a procedure called approximate Bayesian computation (ABC), a method for Bayesian analysis that circumvents the evaluation of the likelihood. Although they have shown great promise for likelihood-free inference, current ABC methods suffer from two problems that have largely prevented their mainstream adoption: long computation time and an inability to scale beyond models with few parameters. We introduce a new ABC algorithm, called ABCDE, that includes differential evolution as a computationally efficient genetic algorithm for proposal generation. ABCDE is able to obtain accurate posterior estimates an order of magnitude faster than a popular rejection-based method and to scale to high-dimensional parameter spaces that have proven difficult for current rejection-based ABC methods. To illustrate its utility we apply ABCDE to several well-established simulation-based models of memory and decision-making that have never been fit in a Bayesian framework.
AUTHORS: Brandon M. Turner (Stanford University) Per B. Sederberg (The Ohio State University)
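A bare-bones rejection-ABC sketch for orientation (mine; ABCDE itself uses differential-evolution proposals and is considerably more sophisticated, and the Gaussian toy model below is an assumption, not one of the paper's cognitive models):

```python
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(loc=1.0, scale=1.0, size=200)         # stand-in "data"

def simulate(theta, n=200):
    """A simulation-based model with no tractable likelihood (here just a Gaussian)."""
    return rng.normal(loc=theta, scale=1.0, size=n)

def summary(x):
    return np.mean(x)

posterior = []
while len(posterior) < 1000:
    theta = rng.uniform(-5, 5)                               # draw from the prior
    if abs(summary(simulate(theta)) - summary(observed)) < 0.1:   # keep if summaries match
        posterior.append(theta)

print(np.mean(posterior), np.std(posterior))                 # approximate posterior moments
```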
In 1966 Gallai conjectured that a connected graph of order n can be decomposed into n/2 or fewer paths when n is even, or (n+1)/2 or fewer paths when n is odd. We shall discuss old and new work on this as yet unsolved conjecture.
We discuss how the title is related to π.
I will give an extended version of my talk at the AustMS meeting about some ongoing work with Pierre-Emmanuel Caprace and George Willis.
Given a locally compact topological group G, the connected component of the identity is a closed normal subgroup G_0 and the quotient group is totally disconnected. Connected locally compact groups can be approximated by Lie groups, and as such are relatively well-understood. By contrast, totally disconnected locally compact (t.d.l.c.) groups are a more difficult class of objects to understand. Unlike in the connected case, it is probably hopeless to classify the simple t.d.l.c. groups, because this would include for instance all simple groups (equipped with the discrete topology). Even classifying the finitely generated simple groups is widely regarded as impossible. However, we can prove some general results about broad classes of (topologically) simple t.d.l.c. groups that have a compact generating set.
Given a non-discrete t.d.l.c. group, there is always an open compact subgroup. Compact totally disconnected groups are residually finite, so have many normal subgroups. Our approach is to analyse a t.d.l.c. group G (which may itself be simple) via normal subgroups of open compact subgroups. From these we obtain lattices and Cantor sets on which G acts, and we can use properties of these actions to demonstrate properties of G. For instance, we have made some progress on the question of whether a compactly generated topologically simple t.d.l.c. group is abstractly simple, and found some necessary conditions for G to be amenable.
We study the problem of finding an interpolating curve passing through prescribed points in Euclidean space. The interpolating curve minimizes the pointwise maximum magnitude, i.e., the L∞-norm, of its acceleration. We reformulate the problem as an optimal control problem and employ simple but effective tools of optimal control theory. We characterize solutions associated with singular (of infinite order) and nonsingular controls. We reduce the infinite-dimensional interpolation problem to an ensuing finite-dimensional one and derive closed-form expressions for interpolating curves. Consequently we devise numerical techniques for finding interpolating curves and illustrate these techniques on examples.
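A crude finite-dimensional sketch (mine, not the authors' optimal-control approach) of the scalar version of this problem: on a grid, minimising the largest second difference subject to interpolation constraints is a small linear program. The knot positions and values below are arbitrary illustrations.

```python
import numpy as np
from scipy.optimize import linprog

knots = {0: 0.0, 40: 1.0, 70: -0.5, 100: 0.0}    # prescribed values y(x_i) at chosen grid indices
N, step = 100, 0.01                              # grid of N+1 points with spacing `step`

c = np.zeros(N + 2); c[-1] = 1.0                 # minimise t, the last variable
A_ub, b_ub = [], []
for i in range(1, N):
    row = np.zeros(N + 1)
    row[i - 1], row[i], row[i + 1] = 1.0, -2.0, 1.0
    row /= step * step                           # second difference approximating y''(x_i)
    A_ub.append(np.append(row, -1.0)); b_ub.append(0.0)    #  y''_i - t <= 0
    A_ub.append(np.append(-row, -1.0)); b_ub.append(0.0)   # -y''_i - t <= 0

A_eq = np.zeros((len(knots), N + 2)); b_eq = []
for r, (i, v) in enumerate(knots.items()):
    A_eq[r, i] = 1.0; b_eq.append(v)             # interpolation constraints y_i = v

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * (N + 1) + [(0, None)])
print(res.fun)                                   # discrete estimate of the minimal max |y''|
```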
Infecting Aedes aegypti mosquitoes with Wolbachia has been proposed as an alternative means of reducing dengue transmission. If Wolbachia-infected mosquitoes can invade and dominate the population of Aedes aegypti mosquitoes, they can reduce dengue transmission. Cytoplasmic incompatibility (CI) provides the reproductive advantage for Wolbachia-infected mosquitoes with which they can reproduce more and dominate the population. A mosquito population model is developed in order to determine the survival of Wolbachia-infected mosquitoes when they are released into the wild. The model has two stable, physically realistic steady states. The model reveals that once the Wolbachia-infected mosquitoes survive, they ultimately dominate the population.
Giuga's conjecture will be introduced, and we will discuss what's changed in the computation of a counterexample in the last 17 years.
Automata groups are a class of groups generated by recursively defined automorphisms of a regular rooted tree. Associated to each automata group is an object known as the self-similarity graph. Nekrashevych showed that in the case where the group satisfies a natural condition known as contracting, the self-similarity graph is Gromov-hyperbolic and has boundary homeomorphic to the limit space of the group action. I will talk about self-similarity graphs of automata groups that do not satisfy the contracting condition.
In this talk, we present our ongoing efforts in solving a number of continuous facility location problems that involve sets using recently developed tools of variational analysis and generalized differentiation. Subgradients of a class of nonsmooth functions called minimal time functions are developed and employed to study these problems. Our approach advances the applications of variational analysis and optimization to a well-developed field of facility location, while shedding new light on well-known classical geometry problems such as the Fermat-Torricelli problem, the Sylvester smallest enclosing circle problem, and the problem of Apollonius.
I will discuss a new algorithm for counting points on hyperelliptic curves over finite fields.
This talk is an introduction to symbolic convex analysis.
Multi-linear functions appear in many global optimization problems, including reformulated quadratic and polynomial optimization problems. There is an extended formulation for the convex hull of the graph of a multi-linear function that requires the use of an exponential number of variables. Relying on this result, we study an approach that generates relaxations for multiple terms simultaneously, as opposed to methods that relax the nonconvexity of each term individually. In some special cases, we are able to establish analytic bounds on the ratio of the strength of the term-by-term and convex hull relaxations. To our knowledge, these are the first approximation-ratio results for the strength of relaxations of global optimization problems. The results lend insight into the design of practical (non-exponentially sized) relaxations. Computations demonstrate that the bounds obtained in this manner are competitive with the well-known semidefinite programming based bounds for these problems.
Joint work with Jim Luedtke, University of Wisconsin-Madison, and Mahdi Namazifar, now with Opera Solutions.
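For orientation (my illustration, not the speakers' construction): the standard term-by-term relaxation of a single bilinear term w = x*y over a box is the McCormick envelope, and the simultaneous, convex-hull relaxations discussed in the talk are tighter than applying such an envelope to each term separately.

```python
import numpy as np

def mccormick_bounds(x, y, xL, xU, yL, yU):
    """McCormick envelope bounds for the bilinear term x*y over [xL,xU] x [yL,yU]."""
    lower = max(xL * y + yL * x - xL * yL, xU * y + yU * x - xU * yU)
    upper = min(xU * y + yL * x - xU * yL, xL * y + yU * x - xL * yU)
    return lower, upper

# Sanity check: the true product always lies between the envelope bounds on the box.
rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.uniform(-1, 2), rng.uniform(0, 3)
    lo, hi = mccormick_bounds(x, y, -1, 2, 0, 3)
    assert lo - 1e-12 <= x * y <= hi + 1e-12
print("McCormick envelope verified on random samples")
```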
Nonexpansive operators in Banach spaces are of utmost importance in Nonlinear Analysis and Optimization Theory. We are concerned in this talk with classes of operators which are, in some sense, nonexpansive not with respect to the norm, but with respect to Bregman distances. Since these distances are not symmetric in general, it seems natural to distinguish between left and right Bregman nonexpansive operators. Some left classes have already been studied quite intensively, so this talk is mainly devoted to right Bregman nonexpansive operators and the relationship between both classes.
This talk is based on joint work with Prof. Simeon Reich and Shoham Sabach from the Technion-Israel Institute of Technology, Haifa.
This is the second part of the informal seminar on an introduction to symbolic convex analysis. The published paper on which this seminar is mainly based can be found at http://www.carma.newcastle.edu.au/jon/fenchel.pdf.
Parameterised approximation is a relatively new but growing field of interest. It merges two ways of dealing with NP-hard optimisation problems, namely polynomial approximation and exact parameterised (exponential-time) algorithms.
We explore opportunities for parameterising constant factor approximation algorithms for vertex cover, and we provide a simple algorithm that works on any approximation ratio of the form $\frac{2l+1}{l+1}$, $l=1,2,\dots$, and has complexity that outperforms previously published algorithms by Bourgeois et al. based on sophisticated exact parameterised algorithms. In particular, for $l=1$ (factor-$1.5$ approximation) our algorithm runs in time $\text{O}^*(\text{simpleonefiveapproxbase}^k)$, where parameter $k \leq \frac{2}{3}\tau$, and $\tau$ is the size of a minimum vertex cover.
Additionally, we present an improved polynomial-time approximation algorithm for graphs of average degree at most four and a limited number of vertices with degree less than two.
Motivated by laboratory studies on the distribution of brain synapses, the classical theory of box integrals - being expectations on unit hypercubes - is extended to a new class of fractal "string-generated Cantor sets" that facilitate fine-tuning of their fractal dimension through a suitable choice of generating string. Closed forms for certain statistical moments on these fractal sets will be presented, together with a precision algorithm for higher embedding dimensions. This is based on joint work with Laur. Prof. Jon Borwein, Prof. David Bailey and Dr. Richard Crandall.
Many problems in diverse areas of mathematics and modern physical sciences can be formulated as a Convex Feasibility Problem, consisting of finding a point in the intersection of finitely many closed convex sets. Two other related problems are the Split Feasibility Problem and the Multiple-Sets Split Feasibility Problem, both very useful when solving inverse problems where constraints are imposed in the domain as well as in the range of a linear operator. We present some recent contributions concerning these problems in the setting of Hilbert spaces along with some numerical experiments to illustrate the implementation of some iterative methods in signal processing.
Automaton semigroups are a natural generalisation of the automaton groups introduced by Grigorchuk and others in the 1980s as examples of groups having various 'exotic' properties. In this talk I will give a brief introduction to automaton semigroups, and then discuss recent joint work with Alan Cain on the extent to which the class of automaton semigroups is closed under certain semigroup constructions (free products and wreath products).
Fundamental questions in basic and applied ecology alike involve complex adaptive systems, in which localized interactions among individual agents give rise to emergent patterns that feed back to affect individual behavior. In such systems, a central challenge is to scale from the "microscopic" to the "macroscopic", in order to understand the emergence of collective phenomena, the potential for critical transitions, and the ecological and evolutionary conflicts between levels of organization. This lecture will explore some specific examples, from universality in bacterial pattern formation to collective motion and collective decision-making in animal groups. It also will suggest that studies of emergence, scaling and critical transitions in physical systems can inform the analysis of similar phenomena in ecological systems, while raising new challenges for theory.
Professor Levin received his B.A. from Johns Hopkins University and his Ph.D. in mathematics from the University of Maryland. At Cornell University (1965-1992), he was Chair of the Section of Ecology and Systematics, and then Director of the Ecosystems Research Center, the Center for Environmental Research and the Program on Theoretical and Computational Biology, as well as Charles A. Alexander Professor of Biological Sciences (1985-1992). Since 1992, he has been at Princeton University, where he is currently George M. Moffett Professor of Biology and Director of the Center for BioComplexity. He retains an Adjunct Professorship at Cornell.
His research interests are in understanding how macroscopic patterns and processes are maintained at the level of ecosystems and the biosphere, in terms of ecological and evolutionary mechanisms that operate primarily at the level of organisms; in infectious diseases; and in the interface between basic and applied ecology.
Simon Levin visits Australia for the first in the Maths of Planet Earth Simons Public Lecture Series. http://mathsofplanetearth.org.au/events/simons/
Let $s_q(n)$ be the sum of the $q$-ary digits of $n$. For example, $s_{10}(1729) = 1 + 7 + 2 + 9 = 19$. It is known what $s_q(n)$ looks like "on average". It can be shown that $s_q(n^h)$ looks $h$ times bigger "on average". This raises the question: is the ratio of these two things $h$ on average? In this talk we will give some history of the sum of digits function, and will give a proof of one of Stolarsky's conjectures concerning the minimal values of the ratio of $s_q(n)$ and $s_q(n^h)$.
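A quick empirical look (mine, not part of the abstract) at the ratio in question, for q = 10 and h = 2:

```python
def s(q, n):
    """Sum of the base-q digits of n."""
    total = 0
    while n:
        total += n % q
        n //= q
    return total

print(s(10, 1729))                                   # 1 + 7 + 2 + 9 = 19
ratios = [s(10, n**2) / s(10, n) for n in range(2, 10**5)]
print(min(ratios), sum(ratios) / len(ratios))        # smallest observed ratio and the average, to compare with h = 2
```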
Three ideas --- active sets, steepest descent, and smooth approximations of functions --- permeate nonsmooth optimization. I will give a fresh perspective on these concepts, and illustrate how many results in these areas can be strengthened in the semi-algebraic setting. This is joint work with A.D. Ioffe (Technion), A.S. Lewis (Cornell), and M. Larsson (EPFL).
After Gromov's work in the 1980s, the modern approach to studying infinite groups is from the geometric point of view, seeing them as metric spaces and using geometric concepts. One of these is the concept of distortion of a subgroup in a group. Here we will give the definition, some examples of distorted and undistorted subgroups, and some recent results on them. The main tools used to establish these results are quasi-metrics or metric estimates, which are quantities that differ from the distance by a multiplicative constant, but which still capture the concept well enough to understand distortion.
The joint spectral radius of a finite set of real $d \times d$ matrices is defined to be the maximum possible exponential rate of growth of long products of matrices drawn from that set. A set of matrices is said to have the finiteness property if there exists a periodic product which achieves this maximal rate of growth. J. C. Lagarias and Y. Wang conjectured in 1995 that every finite set of real $d \times d$ matrices satisfies the finiteness property. However, T. Bousch and J. Mairesse proved in 2002 that counterexamples to the finiteness conjecture exist, showing in particular that there exists a family of pairs of $2 \times 2$ matrices which contains a counterexample. Similar results were subsequently given by V. D. Blondel, J. Theys and A. A. Vladimirov and by V. S. Kozyakin, but no explicit counterexample to the finiteness conjecture was given. This talk will discuss an explicit counter-example to this conjecture.
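To fix ideas, a naive numerical sketch (mine, using an arbitrary example pair rather than the counterexample from the talk): the joint spectral radius can be bounded from below by the best normalised spectral radius of a finite product, which is exactly the quantity the finiteness property asks to be attained by some periodic product.

```python
import itertools
import numpy as np

A0 = np.array([[1.0, 1.0], [0.0, 1.0]])
A1 = 0.7 * np.array([[1.0, 0.0], [1.0, 1.0]])      # 0.7 is an arbitrary illustrative scaling

def best_periodic(mats, k):
    """Best value of rho(A_{i1}...A_{ik})^(1/k) over all products of length k."""
    best = 0.0
    for word in itertools.product(range(len(mats)), repeat=k):
        P = np.eye(2)
        for i in word:
            P = P @ mats[i]
        best = max(best, max(abs(np.linalg.eigvals(P))) ** (1.0 / k))
    return best

for k in range(1, 9):
    print(k, best_periodic([A0, A1], k))   # lower bounds on the joint spectral radius
```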
In 1997, Kaneko introduced the poly-Bernoulli numbers. Poly-Euler numbers are introduced as a generalization of the Euler numbers in a manner similar to the introduction of the poly-Bernoulli numbers. In my talk, some properties of poly-Euler numbers, for example explicit formulas, sign change, a Clausen-von Staudt type formula and combinatorial interpretations, will be shown.
This research is a joint work with Yasuo Ohno.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html
We prove that if $q\ne0,\pm1$ and $\ell\ge1$ are fixed integers, then the numbers $$ 1, \quad \sum_{n=1}^\infty\frac{1}{q^n-1}, \quad \sum_{n=1}^\infty\frac{1}{q^{n^2}-1}, \quad \dots, \quad \sum_{n=1}^\infty\frac{1}{q^{n^\ell}-1} $$ are linearly independent over $\mathbb{Q}$. This generalizes a result of Erdős, who treated the case $\ell=1$. The method is based on the original approaches of Chowla and Erdős, together with some results of Alford, Granville and Pomerance about primes in arithmetic progressions with large moduli.
This is joint work with Yohei Tachiya.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
The desire to understand $\pi$, the challenge, and originally the need, to calculate ever more accurate values of $\pi$, the ratio of the circumference of a circle to its diameter, has captured mathematicians - great and less great - for many many centuries. And, especially recently, $\pi$ has provided compelling examples of computational mathematics. $\pi$, uniquely in mathematics, is pervasive in popular culture and the popular imagination. In this lecture I shall intersperse a largely chronological account of $\pi$'s mathematical and numerical status with examples of its ubiquity. It is truly a number for Planet Earth.
I am grateful to have been appointed in a role with a particular focus on First Year Teaching as well as a research mandate. The prospect of trying to do both well is daunting but exciting. I have begun talking with some of my colleagues who are in somewhat similar roles in other Universities in Australia and overseas about what they do. I would like to share what I've learnt, as well as some of my thoughts so far about how this new role might evolve. I am also very interested in input from the Maths discipline or indeed any of my colleagues as to what you think is important and how this role can benefit the maths discipline and our school.
Reaction-diffusion processes occur in many materials with microstructure such as biological cells, steel or concrete. The main difficulty in modelling and simulating accurately such processes is to account for the fine microstructure of the material. One method of upscaling multi-scale problems, which has proven reliable for obtaining feasible macroscopic models, is the method of periodic homogenisation.
The talk will give an introduction to multi-scale modelling of chemical mechanisms in domains with microstructure as well as to the method of periodic homogenisation. Moreover, a few aspects of solving the resulting systems of equations numerically will also be discussed.
I will survey what is known and some of the open questions.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html
I will survey what is known and some of the open questions.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
We discuss some recently discovered relations between L-values of modular forms and integrals involving the complete elliptic integral K. Gentle and illustrative examples will be given. Such relations also lead to closed forms of previously intractable integrals and (chemical) lattice sums.
I will survey what is known and some of the open questions.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html
I will survey what is known and some of the open questions.
Modern mathematics suffers from subtle but serious logical problems connected with the widespread use of infinite sets and the non-computational aspects of real numbers. The result is an ever-widening gap between the theories of pure mathematics and the computations available to computer scientists.
In this talk we discuss a new approach to mathematics that aims to remove many of the logical difficulties by returning our focus to the all important aspect of the rational numbers and polynomial arithmetic. The key is rational trigonometry, which shows how to rethink the fundamentals of trigonometry and metrical geometry in a purely algebraic way, opens the door to more general non-Euclidean geometries, and has numerous concrete applications for computer scientists interested in graphics and robotics.
The classical prolate spheroidal wavefunctions (prolates) arise when solving the Helmholtz equation by separation of variables in prolate spheroidal coordinates. They interpolate between Legendre polynomials and Hermite functions. In a beautiful series of papers published in the Bell System Technical Journal in the 1960s, they were rediscovered by Landau, Slepian and Pollak in connection with the spectral concentration problem. After years spent out of the limelight while wavelets drew the focus of mathematicians, physicists and electrical engineers, the popularity of the prolates has recently surged through their appearance in certain communication technologies. In this talk we outline some developments in the sampling theory of bandlimited signals that employ the prolates, and the construction of bandpass prolate functions.
This is joint work with Joe Lakey (New Mexico State University)
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
We introduce and study a new dual condition which characterizes zero duality gap in nonsmooth convex optimization. We prove that our condition is weaker than all existing constraint qualifications, including the closed epigraph condition. Our dual condition was inspired by, and is weaker than, the so-called Bertsekas’ condition for monotropic programming problems. We give several corollaries of our result and special cases as applications. We pay special attention to the polyhedral and sublinear cases, and their implications in convex optimization.
This research is a joint work with Jonathan M. Borwein and Liangjin Yao.
Vulnerability measures the resistance of a network to disruptions of its links or nodes. Since any network can be modelled by a graph, many vulnerability measures have been defined to observe the resistance of networks. For this purpose, vulnerability measures such as connectivity, integrity, toughness, etc., have been studied widely over all vertices of a graph. Recently, many researchers have begun to study vulnerability measures defined over the vertices or edges of a graph that have a special property, rather than over all vertices of the graph.
Independent domination, connected domination and total domination are examples of such measures. The total accessibility number of a graph is defined as a new measure by choosing accessible sets $S \subset V$, that is, sets with the accessibility property. The total accessibility number of a graph G is based on the accessibility number of a graph, where the subsets S are the accessible sets of the graph. The accessibility number of a connected graph G is a concept based on the neighbourhood relation between any two vertices via another vertex connected to both of them.
Graph automatic groups are an extension of the notion of an automatic group, introduced by Kharlampovich, Khoussainov and Miasnikov in 2011 with the intention of capturing a wider class of groups while preserving computational properties such as having a quadratic-time word problem. We extend the notion further by replacing regular languages with more general language classes. We prove that nonsolvable Baumslag-Solitar groups are (context-free)-graph automatic, that (context-sensitive)-graph automatic implies a context-sensitive word problem, and that conversely groups with context-sensitive word problem are (context-sensitive)-graph automatic. Finally, an obstruction to (context-sensitive)-graph automatic implying a polynomial-time word problem is given.
This is joint work with Jennifer Taback, Bowdoin College.
Spatial patterns of events that occur on a network of lines, such as traffic accidents recorded on a street network, present many challenges to a statistician. How do we know whether a particular stretch of road is a "black spot", with a higher-than-average risk of accidents? How do we know which aspects of road design affect accident risk? These important questions cannot be answered satisfactorily using current techniques for spatial analysis. The core problem is that we need to take account of the geometry of the road network. Standard methods for spatial analysis assume that "space" is homogeneous; they are inappropriate for point patterns on a linear network, and give fallacious results. To make progress, we must abandon some of the most cherished assumptions of spatial statistics, with far-reaching implications for statistical methodology.
The talk will describe the first few steps towards a new methodology for analysing point patterns on a linear network. Ingredients include stochastic processes, discrete graph theory and classical partial differential equations as well as statistical methodology. Examples come from ecology, criminology and neuroscience.
In this talk we introduce a Douglas-Rachford-inspired projection algorithm, the cyclic Douglas-Rachford iteration scheme. We show that, unlike the classical Douglas-Rachford scheme, the method can be applied directly to convex feasibility problems in Hilbert space without recourse to a product space formulation. Initial results from numerical experiments comparing our method to the classical Douglas-Rachford scheme are promising.
This is joint work with Prof. Jonathan Borwein.
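To give a flavour of the scheme (an illustrative sketch only, not the authors' implementation, and assuming three simple convex sets in the plane with a common point), one sweep of a cyclic composition of two-set Douglas-Rachford operators could be written as follows in Python:

import numpy as np

# Projections onto three convex sets in the plane: a disc and two halfspaces.
def P_disc(x, centre=np.zeros(2), radius=1.0):
    d = x - centre
    n = np.linalg.norm(d)
    return x if n <= radius else centre + radius * d / n

def make_P_halfspace(a, b):                      # the set {x : <a, x> <= b}
    a = np.asarray(a, dtype=float)
    def P(x):
        violation = a @ x - b
        return x if violation <= 0 else x - violation * a / (a @ a)
    return P

projections = [P_disc, make_P_halfspace([1.0, 1.0], 1.0), make_P_halfspace([-1.0, 2.0], 1.0)]

def dr_step(PA, PB, x):
    # two-set Douglas-Rachford operator T = (I + R_B R_A)/2, where R = 2P - I
    y = 2.0 * PA(x) - x
    z = 2.0 * PB(y) - y
    return 0.5 * (x + z)

x = np.array([5.0, -3.0])
for _ in range(200):
    # one sweep of the cyclic scheme: compose the pairwise DR operators around the cycle
    for i in range(len(projections)):
        x = dr_step(projections[i], projections[(i + 1) % len(projections)], x)
print("governing iterate:", x, " shadow point:", projections[0](x))

The shadow point, the projection of the governing iterate onto one of the sets, is the natural candidate for a solution of the feasibility problem.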
We will discuss the substantial mathematical, computational, historical and philosophical aspects of this celebrated and controversial theorem. Much of this talk should be accessible to undergraduates, but we will also discuss some of the crucial details of the actual revision by Robertson, Sanders, Seymour and Thomas of the original Appel-Haken computer proof. We will additionally cover recent new computer proofs by Gonthier and by Steinberger, and also the generalisations of the theorem by Hajós and Hadwiger which are currently still open. New software developed by the speaker will be used to visually illustrate many of the subtle points involved, and we will examine the air of controversy that still surrounds existing computer proofs. Finally, the prospect of a human proof will be canvassed.
ABOUT THE SPEAKER: Mr Michael Reynolds has a Masters degree in Mathematics and extensive experience in the software industry. He is currently doing his PhD in Graph Theory at the University of Newcastle.
In response to a recent report from Australia's Chief Scientist (Prof Ian Chubb), the Australian government sought applications from consortia of universities (and other interested parties) to develop pre-service programs that will improve the quality of mathematics and science school teachers. In particular, the programs should:
At UoN, a group of us from Education and MAPS produced the outline of a vision for our own BTeach/BMath program which builds on local strengths. In the context of very tight timelines, this became a part of an application together with five other universities. In this seminar we will outline the vision that we produced, and invite further contributions and participation, with a view to improving the BMath/BTeach program regardless of the outcome of the application of which we are a part.
We continue with the Probabilistic Method, looking at Chapter 4 of Alon and Spencer. We will consider the second moment method, Chebyshev's inequality, Markov's inequality and Chernoff's inequality.
Our most recent computations tell us that any counterexample to Giuga’s 1950 primality conjecture must have at least 19,907 digits. Equivalently, any number which is both a Giuga and a Carmichael number must have at least 19,907 digits. This bound has not been achieved through exhaustive testing of all numbers with up to 19,907 digits, but rather through exploitation of the properties of Giuga and Carmichael numbers. We introduce the conjecture and an algorithm for finding lower bounds to a counterexample, then present our recent results and discuss challenges to further computation.
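For readers meeting the conjecture for the first time, the congruence involved is easy to state and to test for very small numbers. The following brute-force Python check (an illustration only, and nothing like the structural techniques needed to reach the 19,907-digit bound) confirms that, for all n below 1000, the congruence sum_{k=1}^{n-1} k^(n-1) = -1 (mod n) holds exactly when n is prime:

# Giuga's conjecture: n > 1 is prime iff sum_{k=1}^{n-1} k^(n-1) == -1 (mod n).
# A counterexample would be a composite n satisfying the congruence.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def giuga_congruence_holds(n):
    return sum(pow(k, n - 1, n) for k in range(1, n)) % n == n - 1

for n in range(2, 1000):
    assert giuga_congruence_holds(n) == is_prime(n)
print("Giuga's congruence matches primality for all n < 1000")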
Network infrastructures are a common phenomenon. Network upgrades and expansions typically occur over time due to budget constraints. We introduce a class of incremental network design problems that allows investigation of many of the key issues related to the choice and timing of infrastructure expansions and their impact on the costs of the activities performed on that infrastructure. We examine three variants: incremental network design with shortest paths, incremental network design with maximum flows, and incremental network design with minimum spanning trees. We investigate their computational complexity, analyse the performance of natural heuristics, derive approximation algorithms, and study integer programming formulations.
The degree/diameter problem in graph theory is a theoretical problem with applications in network design. The problem is to find the maximum possible number of nodes in a network subject to a limit on the number of links attached to any node and a limit on the largest number of links that must be traversed when a message is sent from one node to another within the network. An upper bound for this problem, known as the Moore bound, is available; the graphs that attain the bound are called Moore graphs.
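For concreteness (standard background rather than anything specific to this talk), the Moore bound for maximum degree $d\geq3$ and diameter $k$ comes from counting vertices level by level in a breadth-first search from any fixed vertex:
\[
M_{d,k} \;=\; 1 + d + d(d-1) + \cdots + d(d-1)^{k-1} \;=\; 1 + d\,\frac{(d-1)^{k}-1}{d-2}.
\]
In particular $M_{57,2}=1+57+57\cdot56=3250$, the order that the long-sought Moore graph of degree 57 and diameter 2 would have to attain.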
In this talk we give an overview of the existing Moore graphs and we discuss the existence of a Moore graph of degree 57 with diameter 2, which has been an open problem for more than 50 years.
In this talk, we study the rate of convergence of the cyclic projection algorithm applied to finitely many semi-algebraic convex sets. We establish an explicit convergence rate estimate which relies on the maximum degree of the polynomials that generate the semi-algebraic convex sets and the dimension of the underlying space. We achieve our results by exploiting the algebraic structure of the semi-algebraic convex sets.
This is joint work with Jon Borwein and Guoyin Li.
W. T. Tutte published a paper in 1963 entitled "How to Draw a Graph". Tutte's motivation was mathematical, and his paper can be seen as a contribution to the long tradition of geometric representations of combinatorial objects.
Over the following 40-odd years, the motivation for creating visual representations of graphs has changed from mathematical curiosity to visual analytics. Current demand for graph drawing methods is now high, because of the potential for more human-comprehensible visual forms in industries as diverse as biotechnology, homeland security and sensor networks. Many new methods have been proposed, tested, implemented, and found their way into commercial tools. This paper describes two strands of this history: the force directed approach, and the planarity approach. Both approaches originate in Tutte's paper.
Further, we demonstrate a number of methods for graph visualization that can be derived from the weighted version of Tutte's method. These include results on clustered planar graphs, edge-disjoint paths, an animation method, interactions such as adding/deleting vertices/edges, and a focus-plus-context view method.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
Presenters: Judy-anne Osborn, Ben Brawn, Mick Gladys.
Eric Mazur is a Harvard physicist who has become known for the strategies he introduced for teaching large first-year service (physics) classes in a way that seems to improve the students' conceptual understanding of the material whilst not hurting their exam performance. The implementation of these ideas includes the use of clicker-like technology (Mick Gladys will talk about his own implementation using mobile phones) as well as lower-tech card-based analogues. We will screen a YouTube video in which Professor Mazur explains his ideas, and then describe how we have adapted some of them in maths and physics.
In trajectory optimization, the optimal path of a flight system or a group of flight systems is searched for, often in an interplanetary setting: we are in search of trajectories for one or more spacecraft. On the one hand, this is a well-developed field of research, in which commercial software packages are already available for various scenarios. On the other hand, the computation of such trajectories can be rather demanding, especially when low-thrust missions with long travel times (e.g., years) are considered. Such missions invariably involve gravitational slingshot maneuvers at various celestial bodies in order to save propellant or time, and these maneuvers involve vastly different time scales: years of coasting can be followed by course corrections on a daily basis. In this talk, we give an overview of trajectory optimization for space vehicles and highlight some recent algorithmic developments.
You are invited to a celebration of the 21st anniversary of the Factoring Lemma. This lemma was the key to solving some long-standing open problems, and was the starting point of an investigation of totally disconnected, locally compact groups that has ensued over the last 20 years. In this talk, the life of the lemma will be described from its conception through to a very recent strengthening of it. It will be described at a technical level, as well as viewed through its relationships with topology, geometry, combinatorics, algebra, linear algebra and research grants.
A birthday cake will be served afterwards.
Please make donations to the Mathematics Prize Fund in lieu of gifts.
Given a set T in Euclidean space, whose elements are called sites, and a particular site s, the Voronoi cell of s is the set formed by all points closer to s than to any other site. The Voronoi diagram of T is the family of Voronoi cells of all the elements of T. In this talk we show some applications of the Voronoi diagrams of finite and infinite sets and analyze direct and inverse problems concerning the cells. We also discuss the stability of the cells under different types of perturbations and the effect of assigning weights to the sites.
Geodesic metric spaces provide a setting in which we can develop much of nonlinear, and in particular convex, analysis in the absence of any natural linear structure. For instance, in a state space it often makes sense to speak of the distance between two states, or even a chain of connecting intermediate states, whereas the addition of two states makes no sense at all.
We will survey the basic theory of geodesic metric spaces, and in particular Gromov's so called CAT($\kappa$) spaces. And if there is time (otherwise in a later talk), we will examine some recent results concerning alternating projection type methods, principally the Douglas--Rachford algorithm, for solving the two set feasibility problem in such spaces.
In a recent referee report, the referee said he/she could not understand the proofs of either of the two main results. Come and judge for yourself! This is joint work with Darryn Bryant and Don Kreher.
Complex (and Dynamical) Systems
A Data-Based View of Our World
Population censuses and the human face of Australia
Scientific Data Mining
Earth System Modeling
Mitigating Natural Disaster Risk
Sustainability – Environmental modelling
BioInvasion and BioSecurity
Realising Our Subsurface Potential
Abstract submission closes 31st May, 2013.
For more information, visit the conference website.
Roughly speaking, an automorphism $a$ of a graph $G$ is geometric if there is a drawing $D$ of $G$ such that $a$ induces a symmetry of $D$; if $D$ is planar then $a$ is planar. In this talk we discuss geometric and planar automorphisms. In particular we sketch a linear time algorithm for finding a planar drawing of a planar graph with maximum symmetry.
We show that a combination of two simple preprocessing steps would generally improve the conditioning of a homogeneous system of linear inequalities. Our approach is based on a comparison among three different notions of condition numbers for linear inequalities.
The talk is based on joint work with Javier Peña and Negar Soheili (Carnegie-Mellon University).
Overview of Course Content
The classical regularity theory is centred around the implicit function and Lyusternik-Graves theorems, on the one hand, and the Sard theorem and transversality theory, on the other. The theory to be discussed in the course, together with a number of its applications to various problems of variational analysis, deals with similar problems for non-differentiable and set-valued mappings. This theory grew out of the demands of (mainly) optimization theory, and out of the subsequent understanding that some key ideas of the classical theory can be naturally expressed in purely metric terms, without mention of any linear and/or differentiable structures.
Topics to be covered
The "theory" part of the course consists of five sections:
Formally, basic knowledge of functional analysis, plus some acquaintance with convex analysis and nonlinear analysis in Banach spaces (e.g. Fréchet and Gateaux derivatives, the implicit function theorem), will be sufficient for understanding the course. An understanding of the interplay between analytic and geometric concepts would be very helpful.
I will explain how the probabilistic method can be used to obtain lower bounds for the Hadamard maximal determinant problem, and outline how the Lovász local lemma (Alon and Spencer, Corollary 5.1.2) can be used to improve the lower bounds.
This is a continuation of last semester's lectures on the probabilistic method, but is intended to be self-contained.
The finite element method has become the most powerful approach in approximating solutions of partial differential equations arising in modern engineering and physical applications. We present some efficient finite element methods for Reissner-Mindlin, biharmonic and thin plate equations.
In the first part of the talk I present some applied partial differential equations and introduce the finite element method using the biharmonic equation. In the second part of the talk I will discuss the finite element method for the Reissner-Mindlin, biharmonic and thin plate spline equations in a unified framework.
Yes! Finally there is some discrete maths in the high school curriculum! Well, perhaps.
In this talk I will go over the inclusion of discrete mathematics content in the new national curriculum, the existing plans for its implementation, what this will mean for high school teachers, and brainstorm ideas for helping out, if they need our help. I will also talk about "This is Megamathematics" and perhaps, if we have time, we can play a little bit with "Electracity".
This talk deals with problems that are asymptotically related to best-packing and best-covering. In particular, we discuss how to efficiently generate N points on a d-dimensional manifold that have the desirable qualities of well-separation and optimal order covering radius, while asymptotically having a prescribed distribution. Even for certain small numbers of points like N=5, optimal arrangements with regard to energy and polarization can be a challenging problem.
I will report on work I performed with Jim Zhu over the past three years on how to exploit different forms of symmetry in variational analysis. Various open problems will be flagged.
This talk is available at http://carma.newcastle.edu.au/jon/symva-talk.pdf and the related paper is at http://carma.newcastle.edu.au/jon/symmetry.pdf. It has recently appeared in Advances in Nonlinear Analysis.
I will discuss Symmetric criticality and the Mountain pass lemma. I will provide the needed background for anyone who did not come to Part 1.
This talk is available at http://carma.newcastle.edu.au/jon/symva-talk.pdf and the related paper is at http://carma.newcastle.edu.au/jon/symmetry.pdf. It has recently appeared in Advances in Nonlinear Analysis.
Do you ever wonder what goes on behind the closed doors of some of your professors? Or colleagues? What kind of stuff can I do for my Honours degree? Or my RHD studies? Well, let these wonders cease!
This sequence of talks will expose the greatest (mathematical) desires of mathematicians at Newcastle, highlighting several areas of current research from the purest of the pure to the most applicable of the applied. Talks will aim to be accessible to undergraduates (mostly), or anyone with a desire to learn more mathematics.
Program: The feasibility problem associated with nonempty closed convex sets $A$ and $B$ is to find some $x\in A \cap B$. Projection algorithms in general aim to compute such a point. These algorithms play key roles in optimization and have many applications outside mathematics - for example in medical imaging. Until recently convergence results were only available in the setting of linear spaces (more particularly, Hilbert spaces) and where the two sets are closed and convex. The extension into geodesic metric spaces allows their use in spaces where there is no natural linear structure, which is the case for instance in tree spaces, state spaces, phylogenomics and configuration spaces for robotic movements.
After reviewing the pertinent aspects of CAT(0) spaces introduced in Part I, including results for von Neumann's alternating projection method, we will focus on the Douglas-Rachford algorithm in CAT(0) spaces. Two situations arise: spaces with constant curvature and those with non-constant curvature. A prototypical space of the latter kind will be introduced and the behavior of the Douglas-Rachford algorithm within it examined.
Do you ever wonder what goes on behind the closed doors of some of your professors? Or colleagues? What kind of stuff can I do for my Honours degree? Or my RHD studies? Well, let these wonders cease!
This sequence of talks will expose the greatest (mathematical) desires of mathematicians at Newcastle, highlighting several areas of current research from the purest of the pure to the most applicable of the applied. Talks will aim to be accessible to undergraduates (mostly), or anyone with a desire to learn more mathematics.
Programme: The talk will be about new results on modular forms obtained by the speaker in collaboration with Shaun Cooper.
Universities are facing a tumultuous time with external regulation through TEQSA and the rise of MOOCs (Massive Open Online Courses). Disciplines within universities face the challenge of doing research, as well as producing a range of graduates capable of undertaking diverse careers. These are not new challenges. The emergence of MOOCs has raised the question, 'Why go to a University?' These tumultuous times provide a threat as well as an opportunity. How do we balance our activities? Does teaching and learning need to be re-conceptualised? Is it time to seriously consider the role of education and the 'value-add' university education provides? This talk will provide snapshots of work that demonstrate the value-add universities do provide. Evidence is used to challenge current understandings and to chart a way forward.
The aim of this Douglas-Rachford brainstorming session is to discuss:
-New applications and large scale experiments
-Diagnosing and profiling successful non-convex applications
-New conjectures
-Anything else you may think is relevant
Let spt(n) denote the number of smallest parts in the partitions of n. In 2008, Andrews found surprising congruences for the spt-function mod 5, 7 and 13. We discuss new congruences for spt(n) mod powers of 2. We give new generating function identities for the spt-function and Dyson's rank function. Recently with Andrews and Liang we found an spt-crank function that explains Andrews' spt-congruences mod 5 and 7. We extend these results by finding spt-cranks for various overpartition-spt-functions of Ahlgren, Bringmann, Lovejoy and Osburn. This most recent work is joint with Chris Jennings-Shaffer.
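As a toy illustration (my own brute-force computation, not part of the results described above), spt(n) can be evaluated directly from the definition for small n, and the first of Andrews' congruences, spt(5n+4) = 0 (mod 5), can be observed:

# Count smallest parts over all partitions of n and check spt(5m+4) = 0 (mod 5).
def partitions(n, max_part=None):
    # generate the partitions of n as weakly decreasing lists
    max_part = n if max_part is None else max_part
    if n == 0:
        yield []
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def spt(n):
    return sum(p.count(min(p)) for p in partitions(n))

print([spt(n) for n in range(1, 9)])   # 1, 3, 5, 10, 14, 26, 35, 57
for m in range(3):
    assert spt(5 * m + 4) % 5 == 0     # each value is divisible by 5, as the congruence predicts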
Image processing research is dominated, to a considerable degree, by linear-additive models of images. For example, wavelet decompositions are very popular both with experimentalists and theoreticians primarily because of their neatly convergent properties. Fourier and orthogonal series decompositions are also popular in applications, as well as playing an important part in the analysis of wavelet methods.
Multiplicative decomposition, on the other hand, has had very little use in image processing. In 1-D signal processing and communication theory it has played a vital part (amplitude, phase, and frequency modulations of communications theory especially).
In many cases 2-D multiplicative decompositions have just been too hard to formulate or expand. Insurmountable problems (divergences) often occur as the subtle consequences of unconscious errors in the choice of mathematical structure. In my work over the last 17 years I've seen how to overcome some of the problems in 2-D, and the concept of phase is a central, recurring theme. But there is still so much more to be done in 2-D and higher dimensions.
This talk will be a whirlwind tour of some main ideas and applications of phase in imaging.
(Joint work with Konrad Engel and Martin Savelsbergh)
In an incremental network design problem we want to expand an existing network over several time periods, and we are interested in some quality measure for all the intermediate stages of the expansion process. In this talk, we look at the following simple variant: In each time period, we are allowed to add a single edge, the cost of a network is the weight of a minimum spanning tree, and the objective is to minimize the sum of the costs over all time periods. We describe a greedy algorithm for this problem and sketch a proof of the fact that it provides an optimal solution. We also indicate that incremental versions of other basic network optimization problems (shortest path and maximum flow) are NP-hard.
In his deathbed letter to G.H. Hardy, Ramanujan gave a vague definition of a mock modular function: at each root of unity its asymptotics matches that of a modular form, though the choice of modular form depends on the root of unity. Recently Folsom, Ono and Rhoades have proved an elegant result about the match for a general family related to Dyson's rank (mock theta) function and the Andrews-Garvan crank (modular) function. In my talk I will outline some heuristics and elementary ingredients of the proof.
Joint work with David Wood (Monash University, Australia) and Eran Nevo (Ben-Gurion University of the Negev, Israel).
The maximum number of vertices of a graph of maximum degree $\Delta\ge 3$ and diameter $k\ge 2$ is upper bounded by $\Delta^{k}$. If we restrict our graphs to certain classes, better upper bounds are known. For instance, for the class of trees there is an upper bound of $2\Delta^{\lfloor k/2\rfloor}$. The main result of this paper is that, for large $\Delta$, graphs embedded in surfaces of bounded Euler genus $g$ behave like trees. Specifically, we show that, for large $\Delta$, such graphs have orders bounded from above by
\[
\begin{cases} (c_0g+c_1)\Delta^{\lfloor k/2\rfloor} & \text{if $k$ is even}\\
(c_0g^2+c_1)\Delta^{\lfloor k/2\rfloor} & \text{if $k$ is odd}
\end{cases}
\]
where $c_0,c_1$ are absolute constants.
With respect to lower bounds, we construct graphs of Euler genus $g$, odd diameter and orders $(c_0\sqrt{g}+c_1)\Delta^{\lfloor k/2\rfloor}$, for absolute constants $c_0,c_1$.
Our results answer in the negative a conjecture by Miller and Širáň (2005). Before this paper, there were constructions of graphs of Euler genus $g$ and orders $c_0\Delta^{\lfloor k/2\rfloor}$ for an absolute constant $c_0$. Also, Šiagiová and Simanjuntak (2004) provided an upper bound of $(c_0g+c_1)k\Delta^{\lfloor k/2\rfloor}$ with absolute constants $c_0,c_1$.
I will talk about the metrical theory of Diophantine approximation associated with linear forms that are simultaneously small in terms of absolute value rather than the classical nearest integer norm. In other words, we consider linear forms which are simultaneously close to the origin. A complete Khintchine-Groshev type theorem for monotonic approximating functions is established within the absolute value setup. Furthermore, the Hausdorff measure generalization of the Khintchine-Groshev type theorem is obtained. As a consequence we obtain the complete Hausdorff dimension theory. Staying within the absolute value setup, we prove that the corresponding set of badly approximable vectors is of full dimension.
The degree/diameter problem is to find the largest possible order of a graph (or digraph) with given maximum degree (or maximum out-degree) and given diameter. This is one of the unsolved problems in Extremal Graph Theory. Since the general problem is difficult, many variations of the problem have been considered, including bipartite, vertex-transitive, mixed, planar, etc.
This talk is part of a series started in May. The provisional schedule is
Random matrix theory has undergone significant theoretical progress in the last two decades, including proofs on universal behaviour of eigenvalues as the matrix dimension becomes large, and a deep connection between algebraic manipulations of random matrices and free probability theory. Underlying many of the analytical advances are tools from complex analysis. By developing numerical versions of these tools, it is now possible to calculate random matrix statistics to high accuracy, leading to new conjectures on the behaviour of random matrices. We overview recent advances in this direction.
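As a naive illustration of the kind of statistic involved (a Monte Carlo sample rather than the high-accuracy spectral tools the talk describes), the eigenvalues of a large, suitably scaled GOE matrix follow Wigner's semicircle law on [-2, 2]; assuming numpy:

import numpy as np

rng = np.random.default_rng(0)
N = 400
G = rng.standard_normal((N, N))
H = (G + G.T) / np.sqrt(2.0 * N)          # symmetrised and scaled GOE matrix
eigs = np.linalg.eigvalsh(H)
print("largest eigenvalue (about 2):", eigs[-1])
print("mean squared eigenvalue (about 1):", np.mean(eigs ** 2))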
An exact bucket indexed (BI) mixed integer linear programming formulation for nonpreemptive single machine scheduling problems is presented that is a result of an ongoing investigation into strategies to model time in planning applications with greater efficacy. The BI model is a generalisation of the classical time indexed (TI) model to one in which at most two jobs can be processing in each time period. The planning horizon is divided into periods of equal length, but unlike the TI model, the length of a period is a parameter of the model and can be chosen to be as long as the processing time of the shortest job. The two models are equivalent if the problem data are integer and a period is of unit length, but when longer periods are used in the BI model, it can have significantly fewer variables and nonzeros than the TI model at the expense of a greater number of constraints. A computational study using weighted tardiness instances reveals the BI model significantly outperforms the TI model on instances where the mean processing time of the jobs is large and the range of processing times is small, that is, the processing times are clustered rather than dispersed.
Joint work with Natashia Boland and Riley Clement.
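For orientation, a minimal sketch of the classical time-indexed (TI) formulation that the BI model generalises is given below, for a toy weighted-tardiness instance. It assumes the PuLP modelling library and is purely illustrative; neither the data nor the model are those of the talk.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

p = {1: 3, 2: 2, 3: 4}        # processing times (hypothetical toy data)
w = {1: 2, 2: 1, 3: 3}        # tardiness weights
d = {1: 4, 2: 6, 3: 5}        # due dates
T = sum(p.values())           # planning horizon of unit-length periods 0, ..., T-1
jobs, times = list(p), range(T)

# x[j][t] = 1 if job j starts at the beginning of period t
x = {j: {t: LpVariable(f"x_{j}_{t}", cat="Binary")
         for t in times if t + p[j] <= T} for j in jobs}

model = LpProblem("time_indexed_weighted_tardiness", LpMinimize)
# weighted tardiness: the coefficient of x[j][t] is a constant for each (j, t)
model += lpSum(w[j] * max(0, t + p[j] - d[j]) * x[j][t] for j in jobs for t in x[j])
for j in jobs:                # every job starts exactly once
    model += lpSum(x[j][t] for t in x[j]) == 1
for t in times:               # at most one job is in process during period t
    model += lpSum(x[j][s] for j in jobs for s in x[j] if s <= t < s + p[j]) <= 1

model.solve()
for j in jobs:
    start = next(t for t in x[j] if value(x[j][t]) > 0.5)
    print(f"job {j} starts in period {start}")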
TBA
20 minute presentation followed by 10 minutes of questions and discussion.
Joint work with M. Mueller, B. O'Donoghue, and Y. Wang
We consider dynamic trading of a portfolio of assets in discrete periods over a finite time horizon, with arbitrary time-varying distribution of asset returns. The goal is to maximize the total expected revenue from the portfolio, while respecting constraints on the portfolio such as a required terminal portfolio and leverage and risk limits. The revenue takes into account the gross cash generated in trades, transaction costs, and costs associated with the positions, such as fees for holding short positions. Our model has the form of a stochastic control problem with linear dynamics and convex cost function and constraints. While this problem can be tractably solved in several special cases, such as when all costs are convex quadratic, or when there are no transaction costs, our focus is on the more general case, with nonquadratic cost terms and transaction costs.
We show how to use linear matrix inequality techniques and semidefinite programming to produce a quadratic bound on the value function, which in turn gives a bound on the optimal performance. This performance bound can be used to judge the performance obtained by any suboptimal policy. As a by-product of the performance bound computation, we obtain an approximate dynamic programming policy that requires the solution of a convex optimization problem, often a quadratic program, to determine the trades to carry out in each step. While we have no theoretical guarantee that the performance of our suboptimal policy is always near the performance bound (which would imply that it is nearly optimal) we observe that in numerical examples the two values are typically close.
In many problems in control, optimal and robust control, one has to solve global optimization problems of the form $\mathbf{P}:f^\ast=\min_{\mathbf x}\{f(\mathbf x):\mathbf x\in\mathbf K\}$, or, equivalently, $f^\ast=\max\{\lambda:f-\lambda\geq0\text{ on }\mathbf K\}$, where $f$ is a polynomial (or even a semi-algebraic function) and $\mathbf K$ is a basic semi-algebraic set. One may even need to solve the "robust" version $\min\{f(\mathbf x):\mathbf x\in\mathbf K;h(\mathbf x,\mathbf u)\geq0,\forall \mathbf u\in\mathbf U\}$ where $\mathbf U$ is a set of parameters. For instance, some static output feedback problems can be cast as polynomial optimization problems whose feasible set $\mathbf K$ is defined by a polynomial matrix inequality (PMI). And robust stability regions of linear systems can be modeled as parametrized polynomial matrix inequalities (PMIs) where the parameters $\mathbf u$ account for uncertainties and the (decision) variables $\mathbf x$ are the controller coefficients.
Therefore, to solve such problems one needs tractable characterizations of polynomials (and even semi-algebraic functions) which are nonnegative on a set, a topic of independent interest and of primary importance because it also has implications in many other areas.
We will review two kinds of tractable characterizations of polynomials which are non-negative on a basic closed semi-algebraic set $\mathbf K\subset\mathbb R^n$. The first type of characterization is when knowledge of $\mathbf K$ is through its defining polynomials, i.e., $\mathbf K=\{\mathbf x:g_j(\mathbf x)\geq 0, j =1,\dots, m\}$, in which case some powerful certificates of positivity can be stated in terms of sums of squares (SOS)-weighted representations. For instance, this allows one to define a hierarchy of semidefinite relaxations which yields a monotone sequence of lower bounds converging to $f^\ast$ (and in fact, finite convergence is generic). There is also another way of looking at nonnegativity, where now knowledge of $\mathbf K$ is through moments of a measure whose support is $\mathbf K$. In this case, checking whether a polynomial is nonnegative on $\mathbf K$ reduces to solving a sequence of generalized eigenvalue problems associated with a countable (nested) family of real symmetric matrices of increasing size. When applied to $\mathbf P$, this results in a monotone sequence of upper bounds converging to the global minimum, which complements the previous sequence of lower bounds. These two (dual) characterizations provide convex inner (resp. outer) approximations (by spectrahedra) of the convex cone of polynomials nonnegative on $\mathbf K$.
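As a toy illustration of the first kind of certificate (my own example, not one from the talk): on $\mathbf K=\{x\in\mathbb R:\,1-x^2\geq0\}=[-1,1]$ the polynomial $f(x)=1-x$ is nonnegative, and an SOS-weighted representation of Putinar type is
\[
1-x \;=\; \tfrac12\,(1-x)^2 \;+\; \tfrac12\,(1-x^2),
\]
in which both weights, $\tfrac12(1-x)^2$ and the constant $\tfrac12$, are sums of squares. Searching for such representations with a degree bound is a semidefinite feasibility problem, which is what the hierarchy of relaxations mentioned above exploits.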
UPDATE: Abstract submission is now open.
The main thrust of this workshop will be exploring the interface between important methodological areas of infectious disease modelling. In particular, two main themes will be explored: the interface between model-based data analysis and model-based scenario analysis, and the relationship between agent-based/micro-simulation and modelling.
I will discuss some models of what a "random abelian group" is, and some conjectures (the Cohen-Lenstra heuristics of the title) about how they show up in number theory. I'll then discuss the function field setting and a proof of these heuristics, with Ellenberg and Westerland. The proof is an example of a link between analytic number theory and certain classes of results in algebraic topology ("homological stability").
It is well known that the Moore digraph, namely a diregular digraph of degree $d$, diameter $k$ and order $1 + d + d^2 + \cdots + d^k$, only exists if $d = 1$ or $k = 1$. A $(d,k)$-digraph is a diregular digraph of degree $d \geq 2$, diameter $k \geq 2$ and order $d + d^2 + \cdots + d^k$, one less than the Moore bound. Such a $(d,k)$-digraph is also called an almost Moore digraph.
The study of the existence of an almost Moore digraph of degree $d$ and diameter $k$ has received much attention. Fiol, Allegre and Yebra (1983) showed the existence of $(d,2)$-digraphs for all $d \geq 2$. In particular, for $d = 2$ and $k = 2$, Miller and Fris (1988) enumerated all non-isomorphic $(2,2)$-digraphs. Furthermore, Gimbert (2001) showed that there is only one $(d,2)$-digraph for $d \geq 3$. However, for degree 2 and diameter $k \geq 3$, it is known that there is no $(2,k)$-digraph (Miller and Fris, 1992). Furthermore, it was proved that there is no $(3,k)$-digraph with $k \geq 3$ (Baskoro, Miller, Siran and Sutton, 2005). Recently, Conde, Gimbert, Gonzáles, Miret, and Moreno (2008 & 2013) showed that no $(d,k)$-digraphs exist for $k = 3,4$ and for any $d \geq 2$. Thus, the remaining open case is the existence of $(d,k)$-digraphs with $d \geq 4$ and $k \geq 5$.
Several necessary conditions for the existence of $(d,k)$-digraphs, for $d \geq 4$ and $k \geq 5$, have been obtained. In this talk, we shall discuss some necessary conditions for these $(d,k)$-digraphs. Open problems related to this study are also presented.
Joint work with N. Parikh, E. Chu, B. Peleato, and J. Eckstein
Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features, training examples, or both. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. We argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas-Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for $\ell_1$ problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, and support vector machines.
The related paper, code and talk slides are available at http://www.stanford.edu/~boyd/papers/admm_distr_stats.html.
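As a concrete illustration of the pattern (a minimal sketch on synthetic data, assuming numpy; the parameter choices are arbitrary and this is not the authors' reference implementation), ADMM for the lasso alternates a ridge-type x-update, a soft-thresholding z-update and a dual ascent step:

import numpy as np

# min 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM on the splitting x = z.
rng = np.random.default_rng(0)
m, n, lam, rho = 60, 120, 0.1, 1.0
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(m)

def soft_threshold(v, k):                 # proximal operator of k*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse every iteration
Atb = A.T @ b
for _ in range(200):
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # ridge-type step
    z = soft_threshold(x + u, lam / rho)                               # shrinkage step
    u = u + x - z                                                      # dual update
print("nonzero coefficients recovered:", np.count_nonzero(np.abs(z) > 1e-6))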
It was understood by Minkowski that one could prove interesting results in number theory by considering the geometry of lattices in $\mathbb R^n$. (A lattice is simply a grid of points.) This technique is called the "geometry of numbers". We now understand much more about analysis and dynamics on the space of all lattices, and this has led to a deeper understanding of classical questions. I will review some of these ideas, with emphasis on the dynamical aspects.
There exist a variety of mechanisms to share indivisible goods between agents. One of the simplest is to let the agents take turns to pick an item. This mechanism is parameterized by a policy, the order in which agents take turns. A simple model of this mechanism was proposed by Bouveret and Lang in 2011. We show that in their setting the natural policy of letting the agents alternate in picking items is optimal. We also present a number of potential generalizations and extensions.
This is joint work with Nina Narodytska and Toby Walsh.
TBA
Within a nonzero, real Banach space we study the problem of characterising a maximal extension of a monotone operator in terms of minimality properties of representative functions that are bounded by the Penot and Fitzpatrick functions. We single out a property of the space of representative functions that enables a very compact treatment of maximality and pre-maximality issues. As this treatment does not assume reflexivity, and we characterise this property, the existence of a counterexample has a number of consequences for the search for a suitable certificate for maximality in non-reflexive spaces. In particular, one is led to conjecture that some extra side condition to the usual CQ is inevitable. We go on to look at the simplest such condition, which is boundedness of the domain of the monotone operator, and obtain some positive results.
Many successful non-convex applications of the Douglas-Rachford method can be viewed as the reconstruction of a matrix, with known properties, from a subset of its entries. In this talk we discuss recent successful applications of the method to a variety of (real) matrix reconstruction problems, both convex and non-convex.
This is joint work with Fran Aragón and Matthew Tam.
I will report on recent joint work (with J.Y. Bello Cruz, H.M. Phan, and X. Wang) on the Douglas–Rachford algorithm for finding a point in the intersection of two subspaces. We prove that the method converges strongly to the projection of the starting point onto the intersection. Moreover, if the sum of the two subspaces is closed, then the convergence is linear with the rate being the cosine of the Friedrichs angle between the subspaces. Our results improve upon existing results in three ways: First, we identify the location of the limit and thus reveal the method as a best approximation algorithm; second, we quantify the rate of convergence, and third, we carry out our analysis in general (possibly infinite-dimensional) Hilbert space. We also provide various examples as well as a comparison with the classical method of alternating projections.
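The rate statement is easy to observe numerically. In the sketch below (an illustration of the stated setting, not the authors' code) the two subspaces are lines through the origin in the plane meeting at angle theta, which is then also the Friedrichs angle, and the observed linear rate of the iterates matches cos(theta):

import numpy as np

theta = 0.3                                   # angle between the two lines
a = np.array([1.0, 0.0])                      # unit vector spanning line A
b = np.array([np.cos(theta), np.sin(theta)])  # unit vector spanning line B
PA = lambda v: (a @ v) * a                    # orthogonal projections onto the lines
PB = lambda v: (b @ v) * b

def T(v):                                     # Douglas-Rachford operator I - P_A + P_B(2P_A - I)
    return v - PA(v) + PB(2.0 * PA(v) - v)

x = np.array([3.0, 4.0])
errors = []
for _ in range(60):
    x = T(x)
    errors.append(np.linalg.norm(x))          # the intersection is {0}, so the error is ||x||
ratios = [errors[k + 1] / errors[k] for k in range(40, 59)]
print("observed linear rate:", np.mean(ratios), " cos(Friedrichs angle):", np.cos(theta))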
Extremal graph theory includes problems of determining the maximum number of edges in a graph on $n$ vertices that contains no forbidden subgraphs. We consider only simple graphs with no loops or multiple edges, and the forbidden subgraphs under consideration are cycles of length 3 and 4 (triangle and square). This problem was proposed by Erdős in 1975. Let $n$ denote the number of vertices in a graph $G$. By $ex(n; \{C_3,C_4\})$, or simply $ex(n;4)$, we mean the maximum number of edges in a graph of order $n$ and girth at least $g \geq 5$. There are only 33 exact values of $ex(n;4)$ currently known. In this talk, I give an overview of the current state of research on this problem, regarding the exact values, as well as the lower bound and the upper bound of the extremal numbers when the exact value is not known.
Popular accounts of evolution typically create an expectation that populations become ever better adapted over time, and some formal treatments of evolutionary processes suggest this too. However, such analyses do not highlight the fact that competition with conspecifics has negative population-level consequences too, particularly when individuals invest in success in zero-sum games. My own work is at the interface of theoretical biology and empirical data, and I will discuss several examples where an adaptive evolutionary process leads to something that appears silly from the population point of view, including a heightened risk of extinction in the Gouldian finch, reduced productivity of species in which males do not participate in parental care, and deterministic extinction of local populations in systems that feature sexual parasitism.
Recently a great deal of attention from biologists has been directed to understanding the role of knots in perhaps the most famous of long polymers - DNA. In order for our cells to replicate, they must somehow untangle the approximately two metres of DNA that is packed into each nucleus. Biologists have shown that DNA of various organisms is non-trivially knotted with certain topologies preferred over others. The aim of our work is to determine the "natural" distribution of different knot-types in random closed curves and compare that to the distributions observed in DNA.
Our tool to understand this distribution is a canonical model of long chain polymers - self-avoiding polygons (SAPs). These are embeddings of simple closed curves into a regular lattice. The exact computation of the number of polygons of length n and fixed knot type K is extremely difficult - indeed the current best algorithms can barely touch the first knotted polygons. Instead of exact methods, in this talk I will describe an approximate enumeration method - which we call the GAS algorithm. This is a generalisation of the famous Rosenbluth method for simulating linear polymers. Using this algorithm we have uncovered strong evidence that the limiting distribution of different knot-types is universal. Our data shows that a long closed curve is about 28 times more likely to be a trefoil than a figure-eight, and that the natural distribution of knots is quite different from those found in DNA.
Let $G$ be a connected graph with vertex set $V$ and edge set $E$. The distance $d(u,v)$ between two vertices $u$ and $v$ in $G$ is the length of a shortest $u-v$ path in $G$. For an ordered set $W = \{w_1, w_2, ..., w_k\}$ of vertices and a vertex $v$ in a connected graph $G$, the code of $v$ with respect to $W$ is the $k$-vector \begin{equation} C_W(v)=(d(v,w_1),d(v,w_2), ..., d(v,w_k)). \end{equation} The set $W$ is a resolving set for $G$ if distinct vertices of $G$ have distinct codes with respect to $W$. A resolving set for $G$ containing a minimum number of vertices is called a minimum resolving set or a basis for $G$. The metric dimension, denoted $dim(G)$, is the number of vertices in a basis for $G$. The problem of finding the metric dimension of an arbitrary graph is NP-complete.
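The definition lends itself to a brute-force computation on small graphs. The sketch below (illustrative only, and assuming the networkx package) searches subsets $W$ in increasing order of size and returns the first one whose codes separate all vertices; the search is of course exponential, consistent with the hardness results recalled next.

from itertools import combinations
import networkx as nx

def metric_dimension(G):
    # brute force over candidate resolving sets W, smallest first
    dist = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G.nodes())
    for k in range(1, len(nodes) + 1):
        for W in combinations(nodes, k):
            codes = {tuple(dist[v][w] for w in W) for v in nodes}
            if len(codes) == len(nodes):      # all codes distinct, so W resolves G
                return k, W

print(metric_dimension(nx.petersen_graph()))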
The problem of finding minimum metric dimension is NP-complete for general graphs. Manuel et al. have proved that this problem remains NP-complete for bipartite graphs. The minimum metric dimension problem has been studied for trees, multi-dimensional grids, Petersen graphs, torus networks, Benes and butterfly networks, honeycomb networks, X-trees and enhanced hypercubes.
These concepts have been extended in various ways and studied for different subjects in graph theory, including such diverse aspects as the partition of the vertex set, decomposition, orientation, domination, and coloring in graphs. Many invariants arising from the study of resolving sets in graph theory offer subjects for applicable research.
The theory of conditional resolvability has evolved by imposing conditions on the resolving set. In this talk we recall these concepts, survey the work done so far, and mention directions for future work.
The rough Cayley graph is the analogue, in the context of topological groups, of the standard Cayley graph, which is defined for finitely generated groups. It will be shown how it is possible to associate such a graph to a compactly generated totally disconnected and locally compact (t.d.l.c.) group and how the rough Cayley graph represents an important tool to study the structure of this kind of group.
We analyse local combinatorial structure in product sets of two subsets of a countable group which are "large" with respect to certain classes of (not necessarily invariant) means on the group. As an example of such a phenomenon, we can mention the result of Bergelson, Furstenberg and Weiss which says that the sumset of two sets of positive density in the integers contains, locally, an almost-periodic set. In this theorem, the large sets are the sets of positive density, and the combinatorial structure is an almost-periodic set.
How do a student’s attitude, learning behaviour and achievement in mathematics or statistics relate to each other and how do these change during the course of their undergraduate degree program? These are some of the questions I have been addressing in a longitudinal study that I have undertaken as part of my PhD research. The questions were addressed by soliciting comments from students several times during their undergraduate degree programs; through an initial attitude survey, course-specific surveys for up to two courses each semester and interviews with students near the end of their degrees. In this talk I will introduce you to the attitudes and learning behaviours of the mathematics students I followed through the three years of my research, and discuss their responses to the completed surveys (attitude and course-specific). To illuminate the general responses obtained from the surveys (1074 students completed the initial attitude survey and 645 course-specific surveys were completed), I will also introduce you to Tom, Paul, Kate and Ben, four students of varying degrees of achievement, who I interviewed near the end of their mathematics degrees.
The split feasibility problem (SFP) consists in finding a point in a closed convex subset of a Hilbert space such that its image under a bounded linear operator belongs to a closed convex subset of another Hilbert space. Since its inception in 1994 by Censor and Elfving, it has received much attention thanks mainly to its applications to signal processing and image reconstruction. Iterative methods can be employed to solve the SFP. One of the most popular iterative methods is Byrne's CQ algorithm. However, this algorithm requires prior knowledge (or at least an estimate) of the norm of the bounded linear operator. We introduce a stepsize selection method so that the implementation of the CQ algorithm does not need any prior information regarding the operator norm. Furthermore, a relaxed CQ algorithm, where the two closed convex sets are both level sets of convex functions, and a Halpern-type algorithm are studied under the same stepsize rule, yielding both weak and strong convergence. A more general problem, the multiple-sets split feasibility problem, will also be presented. Numerical experiments are included to illustrate the applications to signal processing and, in particular, to compressed sensing and wavelet-based signal restoration.
Based on joint works with G. López and H-K Xu.
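For reference, the classical CQ iteration is a projected gradient step on $\frac12\|Ax-P_Q(Ax)\|^2$. The sketch below (synthetic data, assuming numpy) uses the fixed stepsize $\gamma\in(0,2/\|A\|^2)$, which requires the operator norm, precisely the prior knowledge that the stepsize rule discussed in the talk avoids:

import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 50
A = rng.standard_normal((m, n))
x_feasible = rng.uniform(0.0, 1.0, n)                 # guarantees the problem is consistent
q_centre, q_radius = A @ x_feasible, 1.0

P_C = lambda x: np.clip(x, 0.0, 1.0)                  # C is the box [0,1]^n
def P_Q(y):                                           # Q is a ball around q_centre
    d = y - q_centre
    nd = np.linalg.norm(d)
    return y if nd <= q_radius else q_centre + q_radius * d / nd

gamma = 1.0 / np.linalg.norm(A, 2) ** 2               # fixed stepsize in (0, 2/||A||^2)
x = np.zeros(n)
for _ in range(500):
    Ax = A @ x
    x = P_C(x - gamma * A.T @ (Ax - P_Q(Ax)))         # Byrne's CQ step
print("distance of Ax to Q:", np.linalg.norm(A @ x - P_Q(A @ x)))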
Our goal is to estimate the rate of growth of a population governed by a simple stochastic model. We may choose $n$ sampling times at which to count the number of individuals present, but due to detection difficulties, or constraints on resources, we are able only to observe each individual with fixed probability $p$. We discuss the optimal sampling times at which to make our observations in order to approximately maximize the accuracy of our estimation. To achieve this, we maximize the expected volume of information obtained from such binomial observations, that is, the Fisher information. For a single sample, we derive an explicit form of the Fisher information. However, finding the Fisher information for higher values of $n$ appears intractable. Nonetheless, we find a very good approximation function for the Fisher information by exploiting the probabilistic properties of the underlying stochastic process and developing a new class of delayed distributions. Both numerical and theoretical results strongly support this approximation and confirm its high level of accuracy.
A numerical method is proposed for constructing an approximation of the Pareto front of nonconvex multi-objective optimal control problems. First, a suitable scalarization technique is employed for the multi-objective optimal control problem. Then by using a grid of scalarization parameter values, i.e., a grid of weights, a sequence of single-objective optimal control problems are solved to obtain points which are spread over the Pareto front. The technique is illustrated on problems involving tumor anti-angiogenesis and a fed-batch bioreactor, which exhibit bang–bang, singular and boundary types of optimal control. We illustrate that the Bolza form, the traditional scalarization in optimal control, fails to represent all the compromise, i.e., Pareto optimal, solutions.
Joint work with Helmut Maurer.
C. Y. Kaya and H. Maurer, A numerical method for nonconvex multi-objective optimal control problems, Computational Optimization and Applications, (appeared online: September 2013, DOI 10.1007/s10589-013-9603-2)
In this talk I will discuss a method of finding simple groups acting on trees. I will discuss the theory behind this process and outline some proofs (time permitting).
The scale function plays a key role in the structure theory of totally disconnected locally compact (t.d.l.c.) groups. Whereas the scale function is known to be continuous on a t.d.l.c. group, analysis of the continuity of the scale in a wider context requires the topologization of the group of continuous automorphisms. Existing topologies for Aut(G) are outlined and shown to be insufficient for guaranteeing the continuity of the scale function. Possible methods of generalising these topologies are explored.
In this talk I will describe an algorithm to do a random walk in the space of all words equal to the identity in a finitely presented group. We prove that the algorithm samples from a well defined distribution, and using the distribution we can find the expected value for the mean length of a trivial word. We then use this information to estimate the cogrowth of the group. We ran the algorithm on several examples; where the cogrowth series is known exactly, our results are in agreement with the exact results. Running the algorithm on Thompson's group $F$, we see behaviour consistent with the hypothesis that $F$ is not amenable.
We propose and study a new method, called the Interior Epigraph Directions (IED) method, for solving constrained nonsmooth and nonconvex optimization problems. The IED method considers the dual problem induced by a generalized augmented Lagrangian duality scheme, and obtains the primal solution by generating a sequence of iterates in the interior of the dual epigraph. First, a deflected subgradient (DSG) direction is used to generate a linear approximation to the dual problem. Second, this linear approximation is solved using a Newton-like step. This Newton-like step is inspired by the Nonsmooth Feasible Directions Algorithm (NFDA), recently proposed by Freire and co-workers for solving unconstrained, nonsmooth convex problems. We have modified the NFDA so that it takes advantage of the special structure of the epigraph of the dual function. We prove that all the accumulation points of the primal sequence generated by the IED method are solutions of the original problem. We carry out numerical experiments by using test problems from the literature. In particular, we study several instances of the Kissing Number Problem, previously solved by various approaches such as an augmented penalty method, the DSG method, as well as the popular differentiable solvers ALBOX (a predecessor of ALGENCAN), Ipopt and LANCELOT. Our experiments show that the quality of the solutions obtained by the IED method is comparable with (and sometimes favourable over) those obtained by the other solvers mentioned.
Joint work with Wilhelm P. Freire and C. Yalcin Kaya.
This colloquium will explain some of the background and significance of the concept of amenability. Arguments with finite groups frequently, without remark, count the number of elements in a subset or average a function over the group. It is usually important in these arguments that the result of the calculation is invariant under translation. Such calculations cannot be so readily made in infinite groups but the concepts of amenability and translation invariant measure on a group in some ways take their place. The talk will explain this and also say how random walks relate to these same ideas.
The link to the animation of the paradoxical decomposition is here.
Times and Dates:
Mon 2 Dec 2013: 10-12, 2-4
Tue 3 Dec 2013: 10-12, 2-4
Wed 4 Dec 2013: 10-12, 2-4
Thu 5 Dec 2013: 10-12, 2-4
Abstract: This will be a short and fast introduction to the field of geometric group theory. Assumed knowledge is abstract algebra (groups and rings) and metric spaces. Topics to be covered include: free groups, presentations, quasiisometry, hyperbolic groups, Dehn functions, growth, amenable groups, cogrowth, percolation, automatic groups, CAT(0) groups, examples: Thompson's group F, self-similar groups (Grigorchuk group), Baumslag-Solitar groups.
In this talk, we provide some characterizations of ultramaximally monotone operators. We establish the Brezis--Haraux condition in the setting of a general Banach space. We also present some characterizations of reflexivity of a Banach space by a linear continuous ultramaximally monotone operator.
Joint work with Jon Borwein.
We develop an integer programming based decision support tool that quickly assesses the throughput of a coal export supply chain for a given level of demand. The tool can be used to rapidly evaluate a number of infrastructures for several future demand scenarios in order to identify a few that should be investigated more thoroughly using a detailed simulation model. To make the natural integer programming model computationally tractable, we exploit problem structure to reduce the number of variables and employ aggregation as well as disaggregation to strengthen the linear programming relaxation. Afterward, we implicitly reformulate the problem to exclude inherent symmetry in the original formulation and use Hall's marriage theorem to ensure its feasibility. Studying the polyhedral structure of a sub-problem, we enhance the formulation by generating strong valid inequalities. The integer programming tool is used in a computational study in which we analyze system performance for different levels of demand to identify potential bottlenecks.
Psychologists and other experiment designers are often faced with the task of creating sets of items to be used in factorial experiments. These sets need to be as similar as possible to each other in terms of the items' given attributes. We name this problem Picking Items for Experimental Sets (PIES). In this talk I will discuss how similarity can be defined, mixed integer programs to solve PIES and heuristic methods.
I will also examine the popular integer programming heuristic, the feasibility pump, which aims to find an integer feasible solution for a MIP. I will show how using different projection algorithms (including Douglas-Rachford), adding randomness, and reformulating the projection spaces change the effectiveness of the heuristic.
A classical nonlinear PDE used for modelling heat transfer between concentric cylinders by fluid convection and also for modelling porous flow can be solved by hand using a low-order perturbation method. Extending this solution to higher order using computer algebra is surprisingly hard owing to exponential growth in the size of the series terms, naively computed. In the mid-1990's, so-called "Large Expression Management" tools were invented to allow construction and use of so-called "computation sequences" or "straight-line programs" to extend the solution to 11th order. The cost of the method was O(N^8) in memory, high but not exponential.
Twenty years of doubling of computer power allows this method to get 15 terms. A new method, which reduces the memory cost to O(N^4), allows us to compute to N=30. At this order, singularities can reliably be detected using the quotient-difference algorithm. This allows confident investigation of the solutions, for different values of the Prandtl number.
This work is joint with Yiming Zhang (PhD Oct 2013).
We consider a problem of minimising $f_1(x)+f_2(y)$ over $x \in X \subseteq R^n$ and $y \in Y \subseteq R^m$ subject to a number of extra coupling constraints of the form $g_1(x) g_2(y) \geq 0$. Due to these constraints, the problem may have a large number of local minima. For any feasible combination of signs of $g_1(x)$ and $g_2(y)$, the coupled problem is decomposable, and the resulting two problems are assumed to be easily solved. An approach to solving the coupled problem is presented. We apply it to solving coupled monotonic regression problems arising in experimental psychology.
I will review the creation and development of the concept of number and the role of visualisation in that development. The relationship between innate human capabilities on the one hand and mathematical research and education on the other will be discussed.
In this seminar I will review my recent work on Hankel determinants and their number-theoretic uses. I will briefly touch on the p-adic evaluation of a particular determinant and comment on how Hankel determinants together with Padé approximants can be used in some irrationality proofs. A fundamental determinant property will be demonstrated and I will show what implications this holds for positive Hankel determinants and where we might go from here.
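As a quick illustration of the objects involved (a classical toy fact, not one of the determinants studied in the talk), the Hankel determinants of the Catalan numbers all equal 1, which is easy to check exactly, assuming sympy:

from sympy import Matrix

def catalan_numbers(count):
    c = [1]
    for i in range(count):
        c.append(c[-1] * 2 * (2 * i + 1) // (i + 2))   # C_{i+1} = C_i * 2(2i+1)/(i+2)
    return c

C = catalan_numbers(12)
for n in range(1, 7):
    H = Matrix(n, n, lambda i, j: C[i + j])            # Hankel matrix (C_{i+j})
    print(n, H.det())                                  # prints 1 for every n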
The previous assessment method for MCHA2000 - Mechatronic Systems (which is common to many other courses) allowed students to collect marks from assessments and quizzes during the semester and pass the course without reaching a satisfactory level of competency in some topics. In 2013, we obtained permission from the President of Academic Senate to test a different assessment scheme aimed at preventing students from passing without attaining a minimum level of competency in all topics of the course. This presentation discusses the assessment scheme tested and the results we obtained, which suggest that the proposed scheme makes a difference.
MCHA2000 is a course about modelling, simulation, and analysis of physical system dynamics. It is believed that the proposed scheme is applicable to other courses.
Bio: A/Prof Tristan Perez, Lecturer of MCHA2000. http://www.newcastle.edu.au/profile/tristan-perez
I'll discuss the analytic solution to the limit shape problem for random domino tilings and "lozenge" tilings, and in particular try to explain how these limiting surfaces develop facets.
It is now known for a number of models of statistical physics in two dimensions (such as percolation or the Ising model) that at their critical point, they do behave in a conformally invariant way in the large-scale limit, and do give rise in this limit to random fractals that can be mathematically described via Schramm's Stochastic Loewner Evolutions.
The goal of the present talk will be to discuss some aspects of what remains valid or should remain valid about such models and their conformal invariance, when one looks at them within a fractal-type planar domain. We shall in particular describe (and characterize) a continuous percolation interface within certain particular random fractal carpets. Part of this talk will be based on joint work with Jason Miller and Scott Sheffield.
Liz will talk about how the UoN could make more use of the flipped classroom. The flipped classroom is an approach where content is provided in advance to students and instead of the traditional lecture the time is spent interacting with students through worked examples etc.
Liz will examine impacts on student learning, but also consider how to make this approach manageable within staff workloads and how lecture theatre design can be altered to facilitate this new way of learning.
Polyhedral links, interlinked and interlocked architectures, have been proposed for the description and characterization of DNA and protein polyhedra. Chirality is a very important feature for biomacromolecules. In this talk, we discuss the topological chirality of a type of DNA polyhedral links constructed by the strategy of "n-point stars" and a type of protein polyhedral links constructed by "three-cross curves" covering. We shall ignore DNA sequence and use the orientation of the two backbone strands of the dsDNA to orient DNA polyhedral links, thus considering DNA polyhedral links as oriented links with antiparallel orientations. We shall ignore protein sequence and view protein polyhedral links as unoriented ones. It is well known that there is a correspondence between alternating links and plane graphs. We prove that links corresponding to bipartite plane graphs have antiparallel orientations, and under these orientations, their writhes are not zero. As a result, this type of right-handed double-crossover 4-turn DNA polyhedral link is topologically chiral. We also prove that the unoriented link corresponding to a connected, even, bipartite plane graph has self-writhe 0, and using the Jones polynomial we present a criterion for chirality of unoriented alternating links with self-writhe 0. By applying this criterion we obtain that 3-regular protein polyhedral links are also topologically chiral. Topological chirality always implies chemical chirality, hence the corresponding DNA and protein polyhedra are all chemically chiral. Our chiral criteria may be used to detect the topological chirality of more complicated DNA and protein polyhedral links to be synthesized by chemists and biologists in the future.
Jonathan Kress of UNSW will be talking about the UNSW experience of using MapleTA for online assignments in Mathematics over an extended period of time.
Ben Carter will be talking about some of the rationale for online assignments, how we're using MapleTA here, and our hopes for the future, including how we might use it as a basis for a flipped classroom approach to some of our teaching.
This talk will give an introduction to the Kepler-Coulomb and harmonic oscillator systems, fundamental in both the classical and quantum worlds. These systems are related by "coupling constant metamorphosis", a remarkable trick that exchanges the energy of one system with the coupling constant of the other. The trick can be seen to be a type of conformal transformation, that is, a scaling of the underlying metric, that maps "conformal symmetries" to "true symmetries" of a Hamiltonian system.
In this talk I will explain the statements above and discuss some applications of coupling constant metamorphosis to superintegrable systems and differential geometry.
A lattice rule with a randomly-shifted lattice estimates a mathematical expectation, written as an integral over the s-dimensional unit hypercube, by the average of n evaluations of the integrand, at the n points of the shifted lattice that lie inside the unit hypercube. This average provides an unbiased estimator of the integral and, under appropriate smoothness conditions on the integrand, it has been shown to converge faster as a function of n than the average at n independent random points (the standard Monte Carlo estimator). In this talk, we study the behavior of the estimation error as a function of the random shift, as well as its distribution for a random shift, under various settings. While it is well known that the Monte Carlo estimator obeys a central limit theorem when $n \rightarrow \infty$, the randomized lattice rule does not, due to the strong dependence between the function evaluations. We show that for the simple case of one-dimensional integrands, the limiting error distribution is uniform over a bounded interval if the integrand is non-periodic, and has a square root form over a bounded interval if the integrand is periodic. We find that in higher dimensions, there is little hope to precisely characterize the limiting distribution in a useful way for computing confidence intervals in the general case. We nevertheless examine how this error behaves as a function of the random shift from different perspectives and on various examples. We also point out a situation where a classical central-limit theorem holds when the dimension goes to infinity, we provide guidelines on when the error distribution should not be too far from normal, and we examine how far from normal the error distribution is in examples inspired by real-life applications.
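As a minimal sketch of the estimator described above (with an illustrative, not optimally constructed, generating vector and a toy integrand): a rank-1 lattice rule with one uniform random shift, compared against plain Monte Carlo at the same sample size.

import numpy as np

def shifted_lattice_estimate(f, n, z, rng):
    shift = rng.random(len(z))                       # one uniform random shift
    pts = np.mod(np.arange(n)[:, None] * np.asarray(z) / n + shift, 1.0)
    return f(pts).mean()                             # average of f over the shifted lattice

# Toy integrand on [0,1]^3 with exact integral 1.
f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)
rng = np.random.default_rng(1)
n, z = 1021, [1, 306, 426]                           # generating vector chosen for illustration only

lat = [shifted_lattice_estimate(f, n, z, rng) for _ in range(200)]
mc = [f(rng.random((n, 3))).mean() for _ in range(200)]
print("lattice rule: mean abs error %.2e" % np.mean(np.abs(np.array(lat) - 1.0)))
print("Monte Carlo:  mean abs error %.2e" % np.mean(np.abs(np.array(mc) - 1.0)))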
Without convexity the convergence of a descent algorithm can normally only be certified in the weak sense that every accumulation point of the sequence of iterates is critical. This does not at all correspond to what we observe in practice, where these optimization methods always converge to a single limit point, even though convergence may sometimes be slow.
Around 2006 it was observed that convergence to a single limit can be proved for objective functions having certain analytic features. The property which is instrumental here is called the Lojasiewicz inequality, imported from analytic function theory. While this has been successfully applied to smooth functions, the case of non-smooth functions turns out to be more difficult. In this talk we obtain some progress for upper-C1 functions. Then we proceed to show that this is not just out of a theoretical sandpit, but has consequences for applications in several fields. We sketch an application in destructive testing of laminate materials.
As is well known, semidefinite relaxations of discrete optimization problems can yield excellent bounds on their solutions. We present three examples from our collaborative research. The first addresses the quadratic assignment problem, and a formulation is developed which yields the strongest lower bounds known for larger dimensions. Utilizing the latest iterative SDP solver and ideas from verified computing, a realistic problem from communications is solved for dimensions up to 512.
A strategy based on the Lovasz theta function is generalized to compute upper bounds on the spherical kissing number utilizing SDP relaxations. Multiple precision SDP solvers are needed and improvements on known results for all kissing numbers in dimensions up to 23 are obtained. Finally, generalizing ideas of Lex Schrijver improved upper bounds for general binary codes are obtained in many cases.
Brad Pitt's zombie-attack movie "World War Z" may not seem like a natural jumping-off point for a discussion of mathematics or science, but in fact it was a request I received to review that movie for "The Conversation", and the review I wrote, that led to my being invited to give a public lecture on zombies and maths at the Academy of Science next week. This week's colloquium will largely be a preview of that talk, so should be generally accessible.
My premise is that movies and maths have something in common. Both enable a trait which seems to be more highly developed in humans than in any other species, with profound consequences: the desire and capacity to explore possibility-space.
The same mathematical models can let us playfully explore how an outbreak of zombie-ism might play out, or how an outbreak of an infectious disease like measles would spread, depending, in part, on what choices we make. Where a movie gives us deep insight into one possibility, mathematics enables us to explore, all at once, millions of scenarios, and see where the critical differences lie.
I will try to use mathematical models of zombie outbreak to discuss how mathematical modelling and mathematical ideas such as functions and phase transitions might enter the public consciousness in a positive way.
The Erdos-Ko-Rado (EKR) Theorem is a classical result in combinatorial set theory and is absolutely fundamental to the development of extremal set theory. It answers the following question: What is the maximum size of a family F of k-element subsets of the set {1,2,...,n} such that any two sets in F have nonempty intersection?
In the 1980's Manickam, Miklos and Singhi (MMS) asked the following question: Given that a set A of n real numbers has sum zero, what is the smallest possible number of k-element subsets of A with nonnegative sum? They conjectured that the optimal solutions for this problem look precisely like the extremal families in the EKR theorem. This problem has been open for almost 30 years and many partial results have been proved. There was a burst of activity in 2012, culminating in a proof of the conjecture in October 2013.
This series of talks will explore the basic EKR theorem and discuss some of the recent results on the MMS conjecture.
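For reference, and stated here only for orientation (standard forms of the results, not taken from the abstract): the EKR theorem says that for $n \ge 2k$ any intersecting family $\mathcal{F}$ of k-element subsets of $\{1,2,\ldots,n\}$ satisfies $|\mathcal{F}| \le \binom{n-1}{k-1}$, with equality for the "star" of all k-sets containing one fixed element, while the MMS conjecture asserts that for $n \ge 4k$ the minimum possible number of k-element subsets with nonnegative sum is the same quantity $\binom{n-1}{k-1}$.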
Nowadays huge amounts of personal data are regularly collected in all spheres of life, creating interesting research opportunities but also a risk to individual privacy. We consider the problem of protecting confidentiality of records used for statistical analysis, while preserving as much of the data utility as possible. Since OLAP cubes are often used to store data, we formulate a combinatorial problem that models a procedure to anonymize 2-dimensional OLAP cubes. In this talk we present a parameterised approach to this problem.
The Erdos-Ko-Rado (EKR) Theorem is a classical result in combinatorial set theory and is absolutely fundamental to the development of extremal set theory. It answers the following question: What is the maximum size of a family F of k-element subsets of the set {1,2,...,n} such that any two sets in F have nonempty intersection?
In the 1980's Manickam, Miklos and Singhi (MMS) asked the following question: Given that a set A of n real numbers has sum zero, what is the smallest possible number of k-element subsets of A with nonnegative sum? They conjectured that the optimal solutions for this problem look precisely like the extremal families in the EKR theorem. This problem has been open for almost 30 years and many partial results have been proved. There was a burst of activity in 2012, culminating in a proof of the conjecture in October 2013.
This series of talks will explore the basic EKR theorem and discuss some of the recent results on the MMS conjecture.
New questions regarding the reliability and verifiability of scientific findings are emerging as computational methods are being increasingly used in research. In this talk I will present a framework for incorporating computational research into the scientific method, namely standards for carrying out and disseminating research to facilitate reproducibility. I will present some recent empirical results on data and code publication; the pilot project http://ResearchCompendia.org for linking data and codes to published results and validating findings; and the "Reproducible Research Standard" for ensuring the distribution of legally usable data and code. If time permits, I will present preliminary work on assessing the reproducibility of published computational findings based on the 2012 ICERM workshop on Reproducibility in Computational and Experimental Mathematics report [1]. Some of this research is described in my forthcoming co-edited books "Implementing Reproducible Research" and "Privacy, Big Data, and the Public Good."
[1] D. H. Bailey, J. M. Borwein, Victoria Stodden "Set the Default to 'Open'," Notices of the AMS, June/July 2013.
This PhD so far has focussed on two distinct optimisation problems pertaining to public transport, as detailed below:
Within public transit systems, so-called flexible transport systems have great potential to offer increases in mobility and convenience and decreases in travel times and operating costs. One such service is the Demand Responsive Connector, which transports commuters from residential addresses to transit hubs via a shuttle service, from where they continue their journey via a traditional timetabled service. We investigate various options for implementing a demand responsive connector and the associated vehicle scheduling problems. Previous work has only considered regional systems, where vehicles drop passengers off at a predetermined station -- we relax that condition and investigate the benefits of allowing alternative transit stations. An extensive computational study shows that the more flexible system offers cost advantages over regional systems, especially when transit services are frequent, or transit hubs are close together, with little impact on customer convenience.
A complement to public transport systems is ad hoc ride sharing, where participants (either offering or requesting rides) are paired with participants wanting the reverse, by some central service provider. Although such schemes are currently in operation, the lack of certainty offered to riders (i.e. the risk of not finding a match, or a driver not turning up) discourages potential users. Critically, this can prevent the system from reaching a "critical mass" and becoming self sustaining. We are investigating the situation where the provider has access to a fleet of dedicated drivers, and may use these to service riders, especially when such a system is in its infancy. We investigate some of the critical pricing issues surrounding this problem, present some optimisation models and provide some computational results.
We show that ESO universal Horn logic (existential second-order logic where the first-order part is a universal Horn formula) is insufficient to capture P, the class of problems decidable in polynomial time. This is true in the presence of a successor relation in the input vocabulary. We provide two proofs -- one based on reduced products of two structures, and another based on approximability theory (the second proof is under the assumption that P is not the same as NP). The difference between the results here and those in (Graedel 1991) is due to the fact that the expressions this talk deals with are at the "structure level", whereas the expressions in (Graedel 1991) are at the "machine level" since they encode machine computations -- a case of "Easier DONE than SAID".
The Erdos-Ko-Rado (EKR) Theorem is a classical result in combinatorial set theory and is absolutely fundamental to the development of extremal set theory. It answers the following question: What is the maximum size of a family F of k-element subsets of the set {1,2,...,n} such that any two sets in F have nonempty intersection?
In the 1980's Manickam, Miklos and Singhi (MMS) asked the following question: Given that a set A of n real numbers has sum zero, what is the smallest possible number of k-element subsets of A with nonnegative sum? They conjectured that the optimal solutions for this problem look precisely like the extremal families in the EKR theorem. This problem has been open for almost 30 years and many partial results have been proved. There was a burst of activity in 2012, culminating in a proof of the conjecture in October 2013.
This series of talks will explore the basic EKR theorem and discuss some of the recent results on the MMS conjecture.
I will describe the research I have been doing with Fran Aragon and others, using graphical methods to study the properties of real numbers. There will be very few formulas and more pictures and movies.
In this final talk of the sequence we will sketch Blinovsky's recent proof of the conjecture: whenever n is at least 4k, and A is a set of n numbers with sum 0, then there are at least (n-1) choose (k-1) subsets of size k which have non-negative sum. The nice aspect of the proof is the combination of hypergraph concepts with convex geometry arguments and a Berry-Esseen inequality for approximating the hypergeometric distribution. The not so nice aspect (which will be omitted in the talk) is the amount of very tedious algebraic manipulation that is necessary to verify the required estimates. There are slides for all four MMS talks here.
The TELL ME agent based model will simulate personal protective decisions such as vaccination or hand hygiene during an influenza epidemic. Such behaviour may be adopted in response to communication from health authorities, taking into account perceived influenza risk. The behaviour decisions are to be modelled with a combination of personal attitude, average local attitude, the local number of influenza cases and the case fatality rate. The model is intended to be used to understand the effects of choices about how to communicate with citizens about protecting themselves from epidemics. I will discuss the TELL ME model design, the cognitive theory supporting the design and some of the expected problems in building the simulation.
This year is the fiftieth anniversary of Ringel's posing of the well-known graph decomposition problem called the Oberwolfach problem. In this series of talks, I shall examine what has been accomplished so far, take a look at current work, and suggest a possible new avenue of approach. The material to be presented essentially will be self-contained.
In this talk I will present a general method of finding simple groups acting on trees. This process, beginning with any group $G$ acting on a tree, produces more groups known as the $k$-closures of $G$. I will use several examples to highlight the versatility of this method, and I will discuss the properties of the $k$-closures that allow us to find abstractly simple groups.
This year is the fiftieth anniversary of Ringel's posing of the well-known graph decomposition problem called the Oberwolfach problem. In this series of talks, I shall examine what has been accomplished so far, take a look at current work, and suggest a possible new avenue of approach. The material to be presented essentially will be self-contained.
This is joint work with Geoffrey Lee.
The set of permutations generated by passing an ordered sequence through a stack of depth 2 followed by an infinite stack in series was shown to be finitely based by Elder in 2005. In this new work we obtain an algebraic generating function for this class, by showing it is in bijection with an unambiguous context-free grammar.
This year is the fiftieth anniversary of Ringel's posing of the well-known graph decomposition problem called the Oberwolfach problem. In this series of talks, I shall examine what has been accomplished so far, take a look at current work, and suggest a possible new avenue of approach. The material to be presented essentially will be self-contained.
In these two talks I want to talk, both generally and personally, about the use of tools in the practice of modern research mathematics. To focus my attention I have decided to discuss the way I and my research group members have used tools, primarily computational ones (visual, numeric and symbolic), during the past five years. When the tools are relatively accessible I shall exhibit details; when they are less accessible I settle for illustrations and discussion of process.
Long before current graphic, visualisation and geometric tools were available, John E. Littlewood, 1885-1977, wrote in his delightful Miscellany:
A heavy warning used to be given [by lecturers] that pictures are not rigorous; this has never had its bluff called and has permanently frightened its victims into playing for safety. Some pictures, of course, are not rigorous, but I should say most are (and I use them whenever possible myself).
Over the past five years, the role of visual computing in my own research has expanded dramatically. In part this was made possible by the increasing speed and storage capabilities - and the growing ease of programming - of modern multi-core computing environments. But, at least as much, it has been driven by paying more active attention to the possibilities for graphing, animating or simulating most mathematical research activities.
The idea of an almost automorphism of a tree will be introduced, as well as what we are calling quasi-regular trees. I will then outline what I have been doing in regard to the almost automorphisms of almost quasi-regular trees with two valencies and the challenges that come with using more valencies.
In these two talks I want to talk, both generally and personally, about the use of tools in the practice of modern research mathematics. To focus my attention I have decided to discuss the way I and my research group members have used tools, primarily computational ones (visual, numeric and symbolic), during the past five years. When the tools are relatively accessible I shall exhibit details; when they are less accessible I settle for illustrations and discussion of process.
Long before current graphic, visualisation and geometric tools were available, John E. Littlewood, 1885-1977, wrote in his delightful Miscellany:
A heavy warning used to be given [by lecturers] that pictures are not rigorous; this has never had its bluff called and has permanently frightened its victims into playing for safety. Some pictures, of course, are not rigorous, but I should say most are (and I use them whenever possible myself).
Over the past five years, the role of visual computing in my own research has expanded dramatically. In part this was made possible by the increasing speed and storage capabilities - and the growing ease of programming - of modern multi-core computing environments. But, at least as much, it has been driven by paying more active attention to the possibilities for graphing, animating or simulating most mathematical research activities.
The American mathematical research community experienced remarkable changes over the course of the three decades from 1920 to 1950. The first ten years witnessed the "corporatization" and "capitalization" of the American Mathematical Society, as mathematicians like Oswald Veblen and George Birkhoff worked to raise private, governmental, and foundation monies in support of research-level mathematics. The next decade, marked by the stock market crash and Depression, almost paradoxically witnessed the formation and building up of a number of strongly research-oriented departments across the nation at the same time that noted mathematical refugees were fleeing the ever-worsening political situation in Europe. Finally, the 1940s saw the mobilization of American research mathematicians in the war effort and their subsequent efforts to insure that pure mathematical research was supported as the Federal government began to open its coffers in the immediately postwar period. Ultimately, the story to be told here is a success story, but one of success in the face of many obstacles. At numerous points along the way, things could have turned out dramatically differently. This talk will explore those historical contingencies.
About the speaker:
Karen Parshall is Professor of History and Mathematics at the University of Virginia, where she has served on the faculty since 1988. Her research focuses primarily on the history of science and mathematics in America and in the history of 19th- and 20th-century algebra. In addition to exploring technical developments of algebra—the theory of algebras, group theory, algebraic invariant theory—she has worked on more thematic issues such as the development of national mathematical research communities (specifically in the United States and Great Britain) and the internationalization of mathematics in the nineteenth and twentieth centuries. Her most recent book (co-authored with Victor Katz), Taming the Unknown: A History of Algebra from Antiquity to the Early Twentieth Century, will be published by Princeton University Press in June 2014.
This talk is a practice talk for an invited talk I will soon give in Indonesia, in which I was asked to present on Education at a conference on Graph Theory.
In 1929 Alfred North Whitehead wrote: "The university imparts information, but it imparts it imaginatively. At least, this is the function it should perform for society. A university which fails in this respect has no reason for existence. This atmosphere of excitement, arising from imaginative consideration, transforms knowledge. A fact is no longer a bare fact: it is invested with all its possibilities."
In the light and inspiration of Whitehead's quote, I will discuss some aspects of the problem and challenge of mathematical education as we meet it in Universities today, with reference to some of the ways that combinatorics may be an ideal vehicle for sharing authentic mathematical experiences with diverse students.
We begin the talk with the story of Dido and the Brachistochrone problem. We show how these two problems lead to the two most fundamental problems of the calculus of variations. The Brachistochrone problem leads to the basic problem of the calculus of variations, which in turn leads to the Euler-Lagrange equation. We show the link between the Euler-Lagrange equations and the laws of classical mechanics.
We also discuss the Legendre conditions and Jacobi conjugate points, which lead to sufficient conditions for weak local minimum points.
Dido's problem leads to the problem of Lagrange, in which Lagrange introduces his multiplier rule. We also speak a bit about the problem of Bolza, and further discuss how the class of extremals can be enlarged, the issue of existence of solutions in the calculus of variations, Tonelli's direct method, and some more facts on the quest for multiplier rules.
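For the record (a textbook statement included only for orientation): for a functional $J[x] = \int_a^b L(t, x(t), \dot{x}(t))\,dt$, the Euler-Lagrange equation is $\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0$; taking $L = \tfrac{1}{2}m\dot{x}^2 - V(x)$ recovers Newton's law $m\ddot{x} = -V'(x)$, which is the link to classical mechanics mentioned above.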
Using gap functions to devise error bounds for some special classes of monotone variational inequality is a fruitful venture, since it allows us to obtain error bounds for certain classes of convex optimization problems. It is to be noted that a Hoffman-type approach to obtaining error bounds for the solution set of a convex programming problem does not turn out to be fruitful, and thus using the vehicle of variational inequality seems fundamental in this case. We begin the discussion by introducing several popular gap functions for variational inequalities, like the Auslender gap function and Fukushima's regularized gap function, and how error bounds can be created out of them. We then also spend a brief time on gap functions for variational inequalities with set-valued maps, which correspond to non-smooth convex optimization problems. We then shift our focus to creating error bounds using the dual gap function, which is, to the best of our knowledge, the only convex gap function in the literature; in fact this gap function had not previously been used for creating error bounds. Error bounds can be used as stopping criteria, and thus the dual gap function can be used both to solve the variational inequality and to develop a stopping criterion. We present several recent results on error bounds using the dual gap function and also provide an application to quasiconvex optimization.
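To fix notation (standard definitions, included only for orientation): for the variational inequality of finding $x^* \in K$ with $\langle F(x^*), y - x^* \rangle \ge 0$ for all $y \in K$, the Auslender gap function is $g(x) = \sup_{y \in K} \langle F(x), x - y \rangle$ and Fukushima's regularized gap function is $g_\alpha(x) = \max_{y \in K} \{ \langle F(x), x - y \rangle - \tfrac{\alpha}{2}\|x - y\|^2 \}$ for a parameter $\alpha > 0$; both are nonnegative on $K$ and vanish precisely at solutions, which is what makes them candidates for the error bounds discussed above.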
I will solve a variety of mathematical problems in Maple. These will come from geometry, number theory, analysis and discrete mathematics.
A companion book chapter is http://carma.newcastle.edu.au/jon/hhm.pdf.
The need for well-trained secondary mathematics teachers is well documented. In this talk we will discuss strategies we have developed at JCU to address the quality of graduating mathematics teachers. These strategies are broadly grouped as (i) having students develop a sense of how they learn mathematics and the skills they can work on to improve their learning of mathematics, and (ii) the need for specific mathematics content subjects for pre-service secondary mathematics teachers.
Our aim in this talk is to show that the D-gap function can play a pivotal role in developing inexact descent methods to solve the monotone variational inequality problem where the feasible set of the variational inequality is a closed convex set rather than just the non-negative orthant. We also focus on the issue of regularization of variational inequalities. Friedlander and Tseng showed in 2007 that regularizing the convex objective function with another convex function, which in practice is chosen appropriately, can make the solution of the problem simpler, and they provided criteria for exact regularization of convex optimization problems. Here we ask to what extent one can extend the idea of exact regularization to the context of variational inequalities. We study this question in this talk and show the central role played by the dual gap function in the analysis.
In this talk we are going to discuss the importance of M-stationarity conditions for a special class of one-stage stochastic mathematical programming problems with complementarity constraints (SMPCC, for short). The M-stationarity concept is well known for deterministic MPCC problems. Using the results for deterministic MPCC problems we can easily derive M-stationarity for SMPCC problems under some well-known constraint qualifications. It is well observed that under the MPCC linear independence constraint qualification we obtain strong stationarity conditions at a local minimum, which is a stronger notion than M-stationarity. The same result can be derived for SMPCC problems under SMPCC-LICQ. The question that then arises is: what is the importance of studying M-stationarity under the assumption of SMPCC-LICQ? To answer this question we have to discuss the sample average approximation (SAA) method, which is a common technique for solving stochastic optimization problems. Here one discretizes the underlying probability space and then, using the strong Law of Large Numbers, approximates the expectation functionals. The main result of this discussion is as follows: if we consider a sequence of M-type Fritz John points of the SAA problems, then any accumulation point of this sequence will be an M-stationary point under SMPCC-LICQ. But this kind of result, in general, does not hold for strong stationarity conditions.
It is axiomatic in mathematics research that all steps of an argument or proof are open to scrutiny. However, a proof based even in part on commercial software is hard to assess, because the source code---and sometimes even the algorithm used---may not be made available. There is the further problem that a reader of the proof may not be able to verify the author's work unless the reader has access to the same software.
For this reason open-source software systems have always enjoyed some use by mathematicians, but only recently have systems of sufficient power and depth become available which can compete with---and in some cases even surpass---commercial systems.
Most mathematicians and mathematics educators seem to gravitate to commercial systems partly because such systems are better marketed, but also in the view that they may enjoy some level of support. But this comes at the cost of initial purchase, plus annual licensing fees. The current state of tertiary funding in Australia means that for all but the very top tier of universities, the expense of such systems is harder to justify.
For educators, a problem is making the system available to students: it is known that students get the most use from a system when they have unrestricted access to it: at home as well as at their institution. Again, the use of an open-source system makes it trivial to provide access.
This talk aims to introduce several very powerful and mature systems: the computer algebra systems Sage, Maxima and Axiom; the numerical systems Octave and Scilab; and the assessment system WeBWorK (or as many of those as time permits). We will briefly describe these systems: their history, current status, usage, and comparison with commercial systems. We will also indicate ways in which anybody can be involved in their development. The presenter will describe his own experiences in using these software systems, and his students' attitudes to them.
Depending on audience interests and expertise, the talk might include looking at a couple of applications: geometry and Gr\"obner bases, derivation of Runge-Kutta explicit formulas, cryptography involving elliptic curves and finite fields, or digital image processing.
The talk will not assume any particular mathematics beyond undergraduate material or material with which the audience is comfortable, and will be as polemical as the audience allows.
The additive or linearized polynomials were introduced by Ore in 1933 as an analogy over finite fields to his theory of difference and difference equations over function fields. The additive polynomials over a finite field $F=GF(q)$, where $q=p^e$ for some prime $p$, are those of the form
$f = f_0 x + f_1 x^p + f_2 x^{p^2} + \cdots + f_m x^{p^m}$ in $F[x]$.
They form a non-commutative left-euclidean principal ideal domain under the usual addition and functional composition, and possess a rich structure in both their decomposition structures and root geometries. Additive polynomials have been employed in number theory and algebraic geometry, and applied to constructing error-correcting codes and cryptographic protocols. In this talk we will present fast algorithms for decomposing and factoring additive polynomials, and also for counting the number of decompositions with particular degree sequences.
Algebraically, we show how to reduce the problem of decomposing additive polynomials to decomposing a related associative algebra, the eigenring. We give computationally efficient versions of the Jordan-Holder and Krull-Schmidt theorems in this context to describe all possible factorizations. Geometrically, we show how to compute a representation of the Frobenius operator on the space of roots, and show how its Jordan form can be used to count the number of decompositions. We also describe an inverse theory, from which we can construct and count the number of additive polynomials with specified factorization patterns.
Some of this is joint work with Joachim von zur Gathen (Bonn) and Konstantin Ziegler (Bonn).
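A quick numerical sanity check of additivity, restricted for simplicity to the prime field GF(p) (the richer decomposition behaviour described above lives over extension fields GF(q) and would need a finite-field library; the coefficients below are arbitrary illustrative choices):

p = 7
coeffs = [3, 5, 2]                                  # f0, f1, f2, so f = 3x + 5x^7 + 2x^49

def f(x):
    return sum(c * pow(x, p ** i, p) for i, c in enumerate(coeffs)) % p

# Additivity f(a+b) = f(a) + f(b) follows from the Frobenius identity (a+b)^p = a^p + b^p mod p.
for a in range(p):
    for b in range(p):
        assert f((a + b) % p) == (f(a) + f(b)) % p
print("f(a+b) = f(a) + f(b) for all a, b in GF(7)")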
I am refereeing a manuscript in which a new construction for producing graphs from a group is given. There are some surprising aspects of this new method and that is what I shall discuss.
The Australian Mathematical Sciences Student Conference is held annually for Australian postgraduate and honours students of any mathematical science. The conference brings students together, gives an opportunity for presentation of work, facilitates dialogue, and encourages collaboration, within a friendly and informal atmosphere.
Visit the conference website for more details.
I will survey my career both mathematically and personally offering advice and opinions, which should probably be taken with so many grains of salt that it makes you nauseous. (Note: Please bring with you a sense of humour and all of your preconceived notions of how your life will turn out. It will be more fun for everyone that way.)
What the three elements of the title have in common is the utility of using graph searching as a model. In this talk I shall discuss the relatively brief history of graph searching, several models currently being employed, several significant results, unsolved conjectures, and the vast expanse of unexplored territory.
I will talk about the geometric properties of conic problems and their interplay with ill-posedness and the performance of numerical methods. This includes some new results on the facial structure of general convex cones, preconditioning of feasibility problems and characterisations of ill-posed systems.
Many biological environments, both intracellular and extracellular, are often crowded by large molecules or inert objects which can impede the motion of cells and molecules. It is therefore essential for us to develop appropriate mathematical tools which can reliably predict and quantify collective motion through crowded environments.
Transport through crowded environments is often classified as anomalous, rather than classical, Fickian diffusion. Over the last 30 years many studies have sought to describe such transport processes using either a continuous time random walk or a fractional order differential equation. For both these models the transport is characterized by a parameter $\alpha$, where $\alpha=1$ is associated with Fickian diffusion and $\alpha<1$ is associated with anomalous subdiffusion. In this presentation we will consider the motion of a single agent migrating through a crowded environment that is populated by impenetrable, immobile obstacles, and we estimate $\alpha$ using mean squared displacement data. These results will be compared with computer simulations mimicking the transport of a population of such agents through a similar crowded environment, and we match averaged agent density profiles to the solution of a related fractional order differential equation to obtain an alternative estimate of $\alpha$. I will examine the relationship between our estimate of $\alpha$ and the properties of the obstacle field for both a single agent and a population of agents; in both cases $\alpha$ decreases as the obstacle density increases, and the rate of decrease is greater for smaller obstacles. These very simple computer simulations suggest that it may be inappropriate to model transport through a crowded environment using widely reported approaches, including power laws to describe the mean squared displacement and fractional order differential equations to represent the averaged agent density profiles.
More details can be found in Ellery, Simpson, McCue and Baker (2014) The Journal of Chemical Physics, 140, 054108.
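A toy version of the single-agent simulation described above (lattice size, obstacle density and walk length are illustrative assumptions only): walkers perform a nearest-neighbour random walk among immobile obstacles, and $\alpha$ is estimated from the slope of log(MSD) against log(t).

import numpy as np

rng = np.random.default_rng(0)
L, density, steps, walkers = 64, 0.3, 400, 300
obstacles = rng.random((L, L)) < density
obstacles[0, 0] = False                              # keep the starting site free
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

pos = np.zeros((walkers, 2), dtype=int)              # unwrapped positions, all starting at the origin
msd = np.zeros(steps)
for t in range(steps):
    trial = pos + moves[rng.integers(0, 4, size=walkers)]
    blocked = obstacles[trial[:, 0] % L, trial[:, 1] % L]
    pos = np.where(blocked[:, None], pos, trial)     # a move onto an obstacle is aborted
    msd[t] = np.mean(np.sum(pos ** 2, axis=1))

t = np.arange(1, steps + 1)
alpha = np.polyfit(np.log(t[50:]), np.log(msd[50:]), 1)[0]
print("estimated alpha = %.2f (1 is Fickian, < 1 is subdiffusive)" % alpha)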
The talk will provide a brief overview of the findings of two completed research projects and one ongoing project related to the knowledge and beliefs of teachers of school mathematics. It will consider some existing frameworks for types of teacher knowledge, and the place of teachers’ beliefs and confidence in relation to these, as well as touching on how a broad construct of teacher knowledge might develop.
We shall finish our look at two-sided group graphs.
The relentless advance of computer technology, a gift of Moore's Law, and the data deluge available via the Internet and other sources, has been a gift to both scientific research and business/industry. Researchers in many fields are hard at work exploiting this data. The discipline of "machine learning," for instance, attempts to automatically classify, interpret and find patterns in big data. It has applications as diverse as supernova astronomy, protein molecule analysis, cybersecurity, medicine and finance. However, with this opportunity comes the danger of "statistical overfitting," namely attempting to find patterns in data beyond prudent limits, thus producing results that are statistically meaningless.
The problem of statistical overfitting has recently been highlighted in mathematical finance. A just-published paper by the present author, Jonathan Borwein, Marcos Lopez de Prado and Jim Zhu, entitled "Pseudo-Mathematics and Financial Charlatanism," draws into question the present practice of using historical stock market data to "backtest" a new proposed investment strategy or exchange-traded fund. We demonstrate that in fact it is very easy to overfit stock market data, given powerful computer technology available, and, further, without disclosure of how many variations were tried in the design of a proposed investment strategy, it is impossible for potential investors to know if the strategy has been overfit. Hence, many published backtests are probably invalid, and this may explain why so many proposed investment strategies, which look great on paper, later fall flat when actually deployed.
In general, we argue that not only do those who directly deal with "big data" need to be better aware of the methodological and statistical pitfalls of analyzing this data, but those who observe these problems of this sort arising in their profession need to be more vocal about them. Otherwise, to quote our "Pseudo-Mathematics" paper, "Our silence is consent, making us accomplices in these abuses."
(see PDF)
The Lagrange multiplier method is fundamental in dealing with constrained optimization problems and is also related to many other important results.
In these two talks we first survey several different ideas in proving the Lagrange multiplier rule and then concentrate on the variational approach.
We will first discuss the idea of a variational proof of the Lagrange multiplier rule in the convex case, and then consider the general case and its relationship with other results.
These talks are a continuation of the e-mail discussions with Professor Jon Borwein and are very informal.
Reproducibility is emerging as a major issue for highly parallel computing, in much the same way (and for many of the same reasons) that it is emerging as an issue in other fields of science, technology and medicine, namely the growing numbers of cases where other researchers cannot reproduce published results. This talk will summarize a number of these issues, including the need to carefully document computational experiments, the growing concern over numerical reproducibility and, once again, the need for responsible reporting of performance. Have we learned the lessons of history?
My talk will be on the projection/reflection methods and the application of tools from convex and variational analysis to optimisation problems, and I will talk about my thesis problem which focuses on the following:
We consider convexity conditions ensuring the monotonicity of right and left Riemann sums of a function $f:[0,1]\rightarrow \mathbb{R}$; applying our results in particular to functions such as
$f(x) = 1/\left(1+x^2\right)$ (a brief numerical sketch of this example appears after the following abstract).

The Lagrange multiplier method is fundamental in dealing with constrained optimization problems and is also related to many other important results.
In these two talks we first survey several different ideas in proving the Lagrange multiplier rule and then concentrate on the variational approach.
We will first discuss the idea of a variational proof of the Lagrange multiplier rule in the convex case, and then consider the general case and its relationship with other results.
These talks are a continuation of the e-mail discussions with Professor Jon Borwein and are very informal.
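As the brief numerical sketch promised in the thesis abstract above (it only tabulates the left and right Riemann sums of $f(x)=1/(1+x^2)$ on $[0,1]$; the talk concerns convexity conditions under which such sequences can be proved monotone in n):

import numpy as np

f = lambda x: 1.0 / (1.0 + x ** 2)
for n in (1, 2, 4, 8, 16, 32):
    x = np.arange(n) / n                             # left endpoints of the n equal subintervals
    left, right = f(x).mean(), f(x + 1.0 / n).mean()
    print("n = %2d   left sum = %.6f   right sum = %.6f" % (n, left, right))
print("exact value pi/4 =", np.pi / 4)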
Usually, when we want to study permutation groups, we look first at the primitive permutation groups (transitive groups in which point stabilizers are maximal); in the finite case these groups are the basic building blocks from which all finite permutation groups are built. Thanks to the seminal O'Nan-Scott Theorem and the Classification of the Finite Simple Groups, the structure of finite primitive permutation groups is broadly known.
In this talk I'll describe a new theorem of mine which extends the O'Nan—Scott Theorem to a classification of all primitive permutation groups with finite point stabilizers. This theorem describes the structure of these groups in terms of finitely generated simple groups.
The eighth edition of the conference series GAGTA (Geometric and Asymptotic Group Theory with Applications) will be held in Newcastle, Australia July 21-25 (Mon-Fri) 2014.
GAGTA conferences are devoted to the study of a variety of areas in geometric and combinatorial group theory, including asymptotic and probabilistic methods, as well as algorithmic and computational topics involving groups. In particular, areas of interest include group actions, isoperimetric functions, growth, asymptotic invariants, random walks, algebraic geometry over groups, algorithmic problems and their complexity, generic properties and generic complexity, and applications to non-commutative cryptography.
Visit the conference web site for more information.
A vast number of natural processes can be modelled by partial differential equations involving diffusion operators. The Navier-Stokes equations of fluid dynamics are among the most popular such models, but many other equations describing flows involve diffusion processes. These equations are often non-linear and coupled, and theoretical analysis can provide only limited information on the qualitative behaviour of their solutions. Numerical analysis is then used to obtain a prediction of the fluid's behaviour.
In many circumstances, the numerical methods used to approximate the models must satisfy engineering or computational constraints. For example, in underground flows in porous media (involved in oil recovery, carbon storage or hydrogeology), the diffusion properties of the medium vary greatly between geological layers, and can be strongly skewed in one direction. Moreover, the available meshes used to discretise the equations may be quite irregular. The sheer size of the domain of study (a few kilometres wide) also calls for methods that can be easily parallelised and give good and stable results on relatively large grids. These constraints make the construction and study of numerical methods for diffusion models very challenging.
In the first part of this talk, I will present some numerical schemes, developed in the last 10 years and designed to discretise diffusion equations as encountered in reservoir engineering, with all the associated constraints. In the second part, I will focus on mathematical tools and techniques constructed to analyse the convergence of numerical schemes under realistic hypotheses (i.e. without assuming non-physical smoothness on the data or the solutions). These techniques are based on the adaptation to the discrete setting of functional analysis results used to study the continuous equations.
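As a minimal sketch of the kind of scheme referred to above (a 1D cell-centred finite-volume discretisation of $-(k(x)u')' = 1$ with homogeneous Dirichlet conditions and harmonic averaging of a strongly discontinuous diffusion coefficient at cell faces; the grid and data are illustrative assumptions, far simpler than the anisotropic, irregular-mesh setting of the talk):

import numpy as np

N = 100
h = 1.0 / N
xc = (np.arange(N) + 0.5) * h
k = np.where(xc < 0.5, 1.0, 1e-3)                    # two "geological layers"

# Face transmissibilities: harmonic mean at interior faces, half-cell values at the boundary.
k_face = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:])
T = np.concatenate(([2.0 * k[0]], k_face, [2.0 * k[-1]])) / h

A = np.zeros((N, N))
for i in range(N):
    A[i, i] = T[i] + T[i + 1]
    if i > 0:
        A[i, i - 1] = -T[i]
    if i < N - 1:
        A[i, i + 1] = -T[i + 1]
u = np.linalg.solve(A, np.full(N, h))                # cell-averaged source 1 times cell size
print("maximum of the discrete solution:", u.max())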
Colin Reid will present some thoughts on limits of contraction groups.
I shall be describing a largely unexplored concept in graph theory which is, I believe, an ideal thesis topic. I shall be presenting this at the CIMPA workshop in Laos in December.
Mathematics can often seem almost too good to be true. This sense that mathematics is marvellous enlivens learning and stimulates research, but we tend to let remarkable things pass without remark after we become familiar with them. The miracles of Pythagorean triples and eigenvalues will be highlights of this talk.
The talk will include some ideas of what could be blended into our teaching program.
We give some background to the metric basis problem (or resolving set) of a graph. We discuss various resolving sets with different conditions forced on them. We mainly stress the ideas of strong metric basis and partition dimension of graphs. We give the necessary literature background on these concepts and some preliminary results. We present our new results obtained so far as part of the research during my candidature. We also list the research problems I propose to study during the remainder of my PhD candidature and we present a tentative timeline of my research activities.
This week I shall start a series of talks on basic pursuit-evasion in graphs (frequently called cops and robber in the literature). We shall do some topological graph theory leading to an intriguing conjecture, and we'll look at a characterization problem.
The Diophantine Problem in group theory can be stated as: is it algorithmically decidable whether an equation whose coefficients are elements of a given group has at least one solution in that group?
The talk will be a survey on this topic, with emphasis on what is known about solving equations in free groups. I will also present some of the algebraic geometry over groups developed in the last 20 years, and the connections to logic and geometry. I will conclude with results concerning the asymptotic behavior of satisfiable homogeneous equations in surface groups.
Jon Borwein will discuss CARMA's new "Risk and finance study group". Please come and learn about the opportunities. See also http://www.financial-math.org/ and http://www.financial-math.org/blog/.
This week I shall continue the discussion of searching graphs.
We present a PSPACE-algorithm to compute a finite graph of exponential size that describes the set of all solutions of equations in free groups with rational constraints. This result became possible due to the recently invented recompression technique of Artur Jez. We show that it is decidable in PSPACE whether the set of all solutions is finite. If the set of all solutions is finite, then the length of a longest solution is at most doubly exponential.
This talk is based on a joint paper with Artur Jez and Wojciech Plandowski (arXiv:1405.5133 and LNCS 2014, Proceedings CSR 2014, Moscow, June 7 -- 11, 2014).
Ben will attempt to articulate what he has been meaning to work on. That is, choosing representatives with smallest 1-norm in an effort to find a nice bound on the number of vertices on level 1 of the corresponding rooted almost quasi-regular tree with 1 defect, and other ideas on choosing good representatives.
Brian Alspach will continue discussing searching graphs embedded on the torus.
The restricted product over $X$ of copies of the $p$-adic numbers $\mathbb{Q}_p$, denoted $\mathbb{Q}_p(X)$, is self-dual and is the natural $p$-adic analogue of Hilbert space. The additive group of this space is locally compact and the continuous endomorphisms of the group are precisely the continuous linear operators on $\mathbb{Q}_p(X)$.
Attempts to develop a spectral theory for continuous linear operators on $\mathbb{Q}_p(X)$ will be described at an elementary level. The Berkovich spectral theory over non-Archimedean fields will be summarised and the spectrum of the linear operator $T$ compared with the scale of $T$ as an endomorphism of $(\mathbb{Q}_p(X),+)$.
The original motivation for this work, which is joint with Andreas Thom (Leipzig), will also be briefly discussed. A certain result that holds for representations of any group on a Hilbert space, proved by operator theoretic methods, can only be proved for representations of sofic groups on $\mathbb{Q}_p(X)$ and it is thought that the difficulty might lie with the lack of understanding of linear operators on $\mathbb{Q}_p(X)$ rather than with non-sofic groups.
This forum is a follow-on from the seminar that Professor Willis gave three weeks prior, on maths that seems too good to be true, and his ideas for incorporating the surprising and enlivening into what and how we teach: he gave as exemplars the miracles of Pythagorean triples and eigenvalues. A question raised in the discussion at that seminar was whether and how we might use assessment to encourage the kinds of learning we would like. This forum will be an opportunity to further that conversation.
Jeff, Andrew and Massoud have each kindly agreed to give us 5 minute presentations relating to the latter year maths courses that they have recently been teaching, to get our forum started. Jeff may speak on his developments in his new course on Fourier methods, Andrew will talk about some of the innovations that were introduced into Topology in the last few offerings which he has been using and further developing, and Massoud has a range of OR courses he might speak about.
Everyone is encouraged to share examples of their own practice or ideas that they have that may be of interest to others.
A locating-total dominating set (LTDS) in a connected graph G is a total dominating set $S$ of $G$ such that for every two vertices $u$ and $v$ in $V(G)-S$, $N(u) \cap S \neq N(v) \cap S$. Determining the minimum cardinality of a locating-total dominating set is called the locating-total domination problem, and this minimum is denoted $\gamma_t^l(G)$. We have improved the lower bound obtained by M. A. Henning and N. J. Rad [1]. We have also proved that the bound obtained is sharp for some special families of regular graphs.
[1] M. A. Henning and N. J. Rad, Locating-total dominations in graphs, Discrete Applied Mathematics, 160(2012), 1986-1993.
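Purely to illustrate the definition above (a brute-force computation on the 6-cycle, unrelated to the bounds in the talk):

from itertools import combinations

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]     # the cycle C_6
V = range(6)
N = {v: {u for e in edges for u in e if v in e and u != v} for v in V}

def is_ltds(S):
    S = set(S)
    if any(not (N[v] & S) for v in V):                        # total domination
        return False
    traces = [frozenset(N[v] & S) for v in V if v not in S]   # locating condition
    return len(traces) == len(set(traces))

best = next(S for k in range(1, 7) for S in combinations(V, k) if is_ltds(S))
print("a minimum locating-total dominating set of C_6:", best, "of size", len(best))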
8:30 am | Registration, coffee and light breakfast
9:00 am | Director's Welcome
9:30 am | Session: "Research at CARMA"
10:30 am | Morning tea
11:00 am | Session: "Academic Liaising"
11:30 am | Session: "Education/Outreach Activities"
12:30 pm | Lunch
2:00 pm | Session: "Future of Research at the University"
2:30 pm | Session: "Future Planning for CARMA"
3:30 pm | Afternoon tea
4:00 pm | Session: Talks by members (to 5:20 pm)
6:00 pm | Dinner
In this talk we consider economic Model Predictive Control (MPC) schemes. "Economic" means that the MPC stage cost models economic considerations (like maximal yield, minimal energy consumption...) rather than merely penalizing the distance to a pre-computed steady state or reference trajectory. In order to keep implementation and design simple, we consider schemes without terminal constraints and costs.
In the first (longer) part of the talk, we summarize recent results on the performance and stability properties of such schemes for nonlinear discrete time systems. Particularly, we present conditions under which one can guarantee practical asymptotic stability of the optimal steady state as well as approximately optimal averaged and transient performance. Here, dissipativity of the underlying optimal control problems and the turnpike property are shown to play an important role (this part is based on joint work with Tobias Damm, Marleen Stieler and Karl Worthmann).
In the second (shorter) part of the talk we present an application of an economic MPC scheme to a Smart Grid control problem (based on joint work with Philipp Braun, Christopher Kellett, Steven Weller and Karl Worthmann). While economic MPC shows good results for this control problem in numerical simulations, several aspects of this application are not covered by the available theory. This is explained in the last part of the talk, along with some suggestions on how to overcome this gap.
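A bare-bones sketch of a receding-horizon loop without terminal constraints or costs, in the spirit of the schemes above; the scalar dynamics, the economic stage cost and the state bounds are illustrative assumptions only, far from the Smart Grid application.

import numpy as np
from scipy.optimize import minimize

N = 10                                               # prediction horizon

def rollout_cost(u_seq, x0):
    x, cost = x0, 0.0
    for u in u_seq:
        cost += x + u ** 2                           # "economic" stage cost, not a distance to a reference
        x = x + u
        cost += 1e3 * (max(0.0, -x) + max(0.0, x - 2.0))   # penalty enforcing 0 <= x <= 2
    return cost

x = 1.5
for step in range(20):
    res = minimize(rollout_cost, np.zeros(N), args=(x,), method="Nelder-Mead")
    u0 = res.x[0]                                    # apply only the first input, then re-optimise
    x = x + u0
    print("step %2d: u = %+.3f, x = %.3f" % (step, u0, x))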
Classical umbral calculus was introduced by Blissard in the 1860's and later studied by E. T. Bell and Rota. It is a symbolic computation method that is particularly efficient for proving identities involving elementary special functions such as Bernoulli or Hermite polynomials. I will show the link between this technique and moment representation, and provide examples of its application.
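As a flavour of the symbolic method (a textbook example, not taken from the talk): writing $B_n \simeq \mathcal{B}^n$ for the Bernoulli numbers, the umbral relation $(\mathcal{B}+1)^n \simeq \mathcal{B}^n$ for $n \ge 2$ gives, for $n=2$, $B_2 + 2B_1 + B_0 = B_2$, i.e. $B_1 = -\tfrac{1}{2}$, and for $n=3$, $B_3 + 3B_2 + 3B_1 + B_0 = B_3$, i.e. $B_2 = \tfrac{1}{6}$; lowering exponents to indices in this way is exactly the device Blissard introduced.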
This is the first in a series of lectures on this fascinating group.
If you’re enrolled in a BMath or Combined Maths degree or have Maths or Stats as a co-major, you’re invited to the B Math Party.
Come along for free food and soft drinks, meet fellow students and talk to staff about courses. Discover opportunities for summer research, Honours, Higher Degrees and scholarships.
The topological and measure structures carried by locally compact groups make them precisely the class of groups to which the methods of harmonic analysis extend. These methods involve the study of spaces of real- or complex-valued functions on the group, and general theorems from topology guarantee that these spaces are sufficiently large. When analysing particular groups, however, particular functions deriving from the structure of the group are at hand. The identity functions in the cases of $(\mathbb{R},+)$ and $(\mathbb{Z},+)$ are the most obvious examples, and coordinate functions on matrix groups and growth functions on finitely generated discrete groups are only slightly less obvious.
In the case of totally disconnected groups, compact open subgroups are essential structural features that give rise to positive integer-valued functions on the group. The set of values of $p$ for which the reciprocals of these functions belong to $L^p$ is related to the structure of the group and, when they do, the $L^p$-norm is a type of $\zeta$-function of $p$. This is joint work with Thomas Weigel of Milan.
This Thursday sees a return to graph searching in the discrete mathematics instructional seminar. I’ll be looking at characterization results.
More than 120 years after their introduction, Lyapunov's so-called First and Second Methods remain the most widely used tools for stability analysis of nonlinear systems. Loosely speaking, the Second Method states that if one can find an appropriate Lyapunov function then the system has some stability property. A particular strength of this approach is that one need not know solutions of the system in order to make definitive statements about stability properties. The main drawback of the Second Method is the need to find a Lyapunov function, which is frequently a difficult task.
Converse Lyapunov Theorems answer the question: given a particular stability property, can one always (in principle) find an appropriate Lyapunov function? In the first instalment of this two-part talk, we will survey the history of the field and describe several such Converse Lyapunov Theorems for both continuous and discrete-time systems. In the second instalment we will discuss constructive techniques for numerically computing Lyapunov functions.
In 1976, Ribe showed that if two Banach spaces are uniformly homeomorphic, then their finite dimensional subspaces are similar in some sense. This suggests that properties of Banach spaces which depend only on finitely many vectors should have a purely metric characterization. We will shortly discuss the history of the Ribe program, as well as some recent developments.
In particular:
It is known that the function $s$ defined on an ordering of the $4^m$ monomial basis matrices of the real representation of the Clifford algebra $Cl(m,m)$, where $s(A) = 0$ if $A$ is symmetric and $s(A) = 1$ if $A$ is skew, is a bent function. It is perhaps less well known that the function $t$, where $t(A) = 0$ if $A$ is diagonal or skew and $t(A) = 1$ otherwise, is also a bent function, with the same parameters as $s$. The talk will describe these functions and their relation to Hadamard difference sets and strongly regular graphs.
The talk was originally presented at ADTHM 2014 in Lethbridge this year.
I will survey some recent and not-so-recent results surrounding the areas of Diophantine approximation and Mahler's method related to variations of the Chomsky-Schützenberger hierarchy.
Third lecture: metric properties.
More than 120 years after their introduction, Lyapunov's so-called First and Second Methods remain the most widely used tools for stability analysis of nonlinear systems. Loosely speaking, the Second Method states that if one can find an appropriate Lyapunov function then the system has some stability property. A particular strength of this approach is that one need not know solutions of the system in order to make definitive statements about stability properties. The main drawback of the Second Method is the need to find a Lyapunov function, which is frequently a difficult task.
Converse Lyapunov Theorems answer the question: given a particular stability property, can one always (in principle) find an appropriate Lyapunov function? In the first instalment of this two-part talk, we will survey the history of the field and describe several such Converse Lyapunov Theorems for both continuous and discrete-time systems. In the second instalment we will discuss constructive techniques for numerically computing Lyapunov functions.
This week I shall finish my discussion about searching graphs by looking at the recent paper by Clarke and MacGillavray that characterizes graphs that are k-searchable.
Optimization problems involving polynomial functions are of great importance in applied mathematics and engineering, and they are intrinsically hard problems. They arise in important engineering applications such as the sensor network localization problem, and provide a rich and fruitful interaction between algebraic-geometric concepts and modern convex programming (semi-definite programming). In this talk, we will discuss some recent progress in polynomial (semi-algebraic) optimization with a focus on the intrinsic link between the polynomial structure and the hidden convexity structure. The talk will be divided into two parts. In the first part, we will describe the key results in this new area, highlighting the geometric and conceptual aspects as well as recent work on global optimality theory, algorithms and applications. In the second part, we will explain how the semi-algebraic structure helps us to analyze some important and classical algorithms in optimization, such as the alternating projection algorithm, the proximal point algorithm and the Douglas-Rachford algorithm (time permitting).
One of the key components of the earth’s climate is the formation and melting of sea ice. Currently, we struggle to model this process correctly. One possible explanation for this shortcoming is that ocean waves play a key role and that their effect needs to be included in climate models. I will describe a series of recent experiments which seem to validate this hypothesis and discuss attempts by myself and others to model wave-ice interaction.
We introduce a subfamily of enlargements of a maximally monotone operator $T$. Our definition is inspired by a 1988 publication of Fitzpatrick. These enlargements are elements of the family of enlargements $\mathbb{E}(T)$ introduced by Svaiter in 2000. These new enlargements share with the $\epsilon$-subdifferential a special additivity property, and hence they can be seen as structurally closer to the $\epsilon$-subdifferential. For the case $T=\nabla f$, we prove that some members of the subfamily are smaller than the $\epsilon$-subdifferential enlargement. In this case, we construct a specific enlargement which coincides with the $\epsilon$-subdifferential.
Joint work with Juan Enrique Martínez Legaz, Mahboubeh Rezaei, and Michel Théra.
We discuss the genesis of symbolic computation, its deployment into computer algebra systems, and the applications of these systems in the modern era.
We will pay special attention to polynomial system solvers and highlight the problems that arise when considering non-linear problems. For instance, forgetting about actually solving, how does one even represent infinite solution sets?
The completion with respect to the degree valuation of the field of rational functions over a finite field is often a fruitful analogue to consider when one would like to test ideas, methods and conjectures in Diophantine approximation for the real numbers. In many respects, this setting behaves very similarly to the real numbers, and in particular the metric theory of Diophantine approximation in this setup is well developed; in some respects, more is known to be true in this setup than in the real numbers. However, natural analogues of other classical theorems in Diophantine approximation fail spectacularly in positive characteristic. In this talk, I will introduce the topic and give old and new results underpinning the similarities and differences of the theories of Diophantine approximation in positive characteristic and in characteristic zero.
Self-avoiding walks are a widely studied model of polymers, which are defined as walks on a lattice where each successive step visits a neighbouring site, provided the site has not already been visited. Despite the apparent simplicity of the model, it has been of much interest to statistical mechanicians and probabilists for over 60 years, and many important questions about it remain open.
One of the most powerful methods to study self-avoiding walks is Monte Carlo simulation. I'll give an overview of the historical developments in this field, and will explain what ingredients are needed for a good Monte Carlo algorithm. I'll then describe how recent progress has allowed for the efficient simulation of truly long walks with many millions of steps. Finally, I'll discuss whether lessons we've learned from simulating self-avoiding walks may be applicable to a wide range of Markov chain Monte Carlo simulations.
We first introduce the notion of pattern sequences, which are defined by the number of (possibly overlapping) occurrences of a given word in the $\langle q,r\rangle$-numeration system. After surveying several properties of pattern sequences, we will give necessary and sufficient criteria for the algebraic independence of their generating functions. As applications, we deduce the linear relations between pattern sequences.
The proofs of the theorem and the corollaries are based on Mahler's method.
The mixed Littlewood conjecture, proposed by de Mathan and Teulie in 2004, states that for every real number $x$ one has $\liminf_{q\to\infty} q \cdot |q|_D \cdot \|qx\| = 0$, where $|q|_D$ is a so-called pseudo-norm which generalises the standard $p$-adic norm. In the talk we'll consider the set mad of potential counterexamples to this conjecture. Thanks to the results of Einsiedler and Kleinbock we already know that the Hausdorff dimension of mad is zero, so this set is very tiny. During the talk we'll see that the continued fraction expansion of every element of mad must satisfy some quite restrictive conditions. For instance, we'll see that for these expansions, considered as infinite words, the complexity function can neither grow too fast nor too slow.
Tensor trains are a new class of functions which are thought to have some potential to deal with high-dimensional problems. While connected with algebraic geometry, the main concepts used are rank-$k$ matrix factorisations. In this talk I will review some basic properties of tensor trains. In particular I will consider algorithms for the solution of linear systems $Ax=0$. This talk is related to research in progress with Jochen Garcke (Uni Bonn and Fraunhofer Institute) on the solution of the chemical master equation. This talk assumes a basic background in matrix algebra. No background in algebraic geometry is required.
Supervisor: Thomas Kalinowski
Supervisor: Thomas Kalinowski
Supervisor: Brailey Sims
Supervisor: Brian Alspach
Multi-objective optimisation is one of the earliest fields of study in operations research. In fact, Francis Edgeworth (1845--1926) and Vilfredo Pareto (1848--1923) laid the foundations of this field of study over one hundred years ago. Many real-world problems involve multiple objectives. Due to conflict between objectives, finding a feasible solution that simultaneously optimises all objectives is usually impossible. Consequently, in practice, decision makers want to understand the trade-off between objectives before choosing a suitable solution. Thus, generating many or all efficient solutions, i.e., solutions in which it is impossible to improve the value of one objective without a deterioration in the value of at least one other objective, is the primary goal in multi-objective optimisation. In this talk, I will focus on Multi-objective Integer Programs (MOIPs) and explain briefly some new efficient algorithms that I have developed since starting my PhD to solve MOIPs. I also explain some links between the ideas of multi-objective integer programming and other fields of study such as game theory.
The Mathematical Sciences Institute will host a three day workshop on more effective use of visualization in mathematics, physics, and statistics, from the perspectives of education, research and outreach. This is the second EViMS meeting, following the highly successful one held in Newcastle in November 2012. Our aim for the workshop is to help mathematical scientists understand the opportunities, risks and benefits of visualization, in research and education, in a world where visual content and new methods are becoming ubiquitous.
Visit the conference website for more information.
(Groups & Dynamics Special Session)
(Maths Education Special Session)
(Operator Algebra/ Functional Analysis Special Session)
(Computational Mathematics Special Session)
In this seminar I will talk about decomposing sequences into maximal palindrome factors and its applications in the hairpin analysis of pathogens such as HIV and TB.
We apply the piecewise constant, discontinuous Galerkin method to discretize a fractional diffusion equation with respect to time. Using Laplace transform techniques, we show that the method is first order accurate at the $n$th time level~$t_n$, but the error bound includes a factor~$t_n^{-1}$ if we assume no smoothness of the initial data. We also show that for smoother initial data the growth in the error bound for decreasing time is milder, and in some cases absent altogether. Our error bounds generalize known results for the classical heat equation and are illustrated using a model 1D problem.
The AMSI Summer School is an exciting opportunity for mathematical sciences students from around Australia to come together over the summer break to develop their skills and networks. Details are available from the 2015 AMSI Summer School website.
Also see the CARMA events page for details of some Summer School seminars, open to all!
Long before current graphic, visualisation and geometric tools were available, John E. Littlewood, 1885-1977, wrote in his delightful Miscellany:
A heavy warning used to be given [by lecturers] that pictures are not rigorous; this has never had its bluff called and has permanently frightened its victims into playing for safety. Some pictures, of course, are not rigorous, but I should say most are (and I use them whenever possible myself). [[L], p. 53]
Over the past decade, the role of visual computing in my own research has expanded dramatically. In part this was made possible by the increasing speed and storage capabilities, and the growing ease of programming, of modern multi-core computing environments [BSC]. But, at least as much, it has been driven by my group's paying more active attention to the possibilities for graphing, animating or simulating most mathematical research activities.
I shall describe diverse work from my group in transcendental number theory (normality of real numbers [AB3]), in dynamic geometry (iterative reflection methods [AB]), in probability (behaviour of short random walks [BS, BSWZ]), and in matrix completion problems (especially as applied to protein conformation [ABT]). While all of this involved significant numerical-symbolic computation, I shall focus on the visual and experimental components.
[AB] F. Aragon and J.M. Borwein, "Global convergence of a non-convex Douglas-Rachford iteration." J. Global Optimization 57(3) (2013), 753–769. DOI 10.1007/s10898-012-9958-4.
[AB3] F. Aragon, D.H. Bailey, J.M. Borwein and P.B. Borwein, "Walking on real numbers." Mathematical Intelligencer 35(1) (2013), 42–60. See also http://walks.carma.newcastle.edu.au/.
[ABT] F. Aragon, J.M. Borwein and M. Tam, "Douglas-Rachford feasibility methods for matrix completion problems." ANZIAM Journal. Galleys June 2014. See also http://carma.newcastle.edu.au/DRmethods/.
[BSC] J.M. Borwein, M. Skerritt and C. Maitland, "Computation of a lower bound to Giuga's primality conjecture." Integers 13 (2013). Online Sept 2013 at #A67, http://www.westga.edu/~integers/cgi-bin/get.cgi.
[BS] J.M. Borwein and A. Straub, "Mahler measures, short walks and logsine integrals." Theoretical Computer Science, special issue on Symbolic and Numeric Computation, 479(1) (2013), 4–21. DOI: http://link.springer.com/article/10.1016/j.tcs.2012.10.025.
[BSWZ] J.M. Borwein, A. Straub, J. Wan and W. Zudilin (with an appendix by Don Zagier), "Densities of short uniform random walks." Can. J. Math. 64(5) (2012), 961–990. http://dx.doi.org/10.4153/CJM-2011-079-2.
[L] J.E. Littlewood, A Mathematician's Miscellany, London: Methuen (1953); J.E. Littlewood and B. Bollobás (ed.), Littlewood's Miscellany, Cambridge University Press, 1986.
This talk will highlight links between topics studied in undergraduate mathematics on one hand and frontiers of current research in analysis and symmetry on the other. The approach will be semi-historical and will aim to give an impression of what the research is about.
Fundamental ideas in calculus, such as continuity, differentiation and integration, are first encountered in the setting of functions on the real line. In addition to topological properties of the line, the algebraic properties of being a group and a field, that the set of real numbers possesses, are also important. These properties express symmetries of the set of real numbers, and it turns out that this combination of calculus, algebra and symmetry extends to the setting of functions on locally compact groups, of which the group of rotations of a sphere and the group of automorphisms of a locally finite graph are examples. Not only do these groups frequently occur in applications, but theorems established prior to 1955 show that they are exactly the groups that support integration and differentiation.
Integration and continuity of functions on the circle and the group of rotations of the circle are the basic ingredients for Fourier analysis, which deals with convolution function algebras supported on the circle. Since these basic ingredients extend to locally compact groups, so do the methods of Fourier analysis, and the study of convolution algebras on these groups is known as harmonic analysis. Indeed, there is such a close connection between harmonic analysis and locally compact groups that any locally compact group may be recovered from the convolution algebras that it carries. This fact has recently been exploited with the creation of a theory of `locally compact quantum groups' that axiomatises properties of the algebras appearing in harmonic analysis and does away with the underlying group.
Locally compact groups have a rich structure theory in which significant advances are also currently being made. This theory divides into two cases: when the group is a connected topological space and when it is totally disconnected. The connected case has been well understood since the solution of Hilbert's Fifth Problem in the 1950's, which showed that they are essentially Lie groups. (Lie groups form the symmetries of smooth structures occurring in physics and underpinned, for example, the prediction of the existence of the Higgs boson.) For a long time it was thought that little could be said about totally disconnected groups in general, although important classes of such groups arising in number theory and as automorphism groups of graphs could be understood using techniques special to those classes. However, a complete general theory of these groups is now beginning to take shape following several breakthroughs in recent years. There is the exciting prospect that an understanding of totally disconnected groups matching that of the connected groups will be achieved in the next decade.
In this talk I will discuss a class of systems evolving over two independent variables, which we refer to as "2D". For the class considered, extensions of ODE Lyapunov stability analysis can be made to ensure different forms of stability of the system. In particular, we can describe sufficient conditions for stability in terms of the divergence of a vector Lyapunov function.
People who study geometry like to ask the question: "What is the shape of that?" In this case, the word "that" can refer to a variety of things, from triangles and circles to knots and surfaces to the universe we inhabit and beyond. In this talk, we will examine some of my favourite gems from the world of geometry and see the interplay between geometry, algebra, and theoretical physics. And the only prerequisite you will need is your imagination!
Norman Do is, first and foremost, a self-confessed maths geek! As a high school student, he represented Australia at the International Mathematical Olympiad. He completed a PhD at The University of Melbourne, before working at McGill University in Canada. He is currently a Lecturer and a DECRA Research Fellow in the School of Mathematical Sciences at Monash University.
His research lies at the interface of geometry and mathematical physics, although he is excited by almost any flavour of mathematics. Norman is heavily involved in enrichment for school students, regularly lecturing at the National Mathematics Summer School and currently chairing the Australian Mathematical Olympiad Senior Problems Committee.
This event is run in conjunction with the University of Newcastle’s 50th year anniversary celebrations. Space! For Star Trek fans it’s the final frontier - with all the vastly, hugely, mindboggling room it contains, it allows scientists and researchers of all persuasions to go where no one has gone before and explore worlds not yet explored. Like Star Trek fans, many mathematicians and statisticians are also interested in exploring the dynamics of space. From a statistician's point of view, our often data-driven perspective means we are concerned with exploring data that exists in multi-dimensional space and trying to visualise it using as few dimensions as possible.
This presentation will outline the links between the analysis of categorical data, multi-dimensional space, and the reduction of this space. The technique we explore is correspondence analysis and we shall see how eigen- and singular value decomposition fit into this data visualisation technique. We shall therefore look at some of the fundamental aspects of correspondence analysis and the various ways in which categorical data can be visualised.
I will summarize the main ingredients and results on classical conjugate duality for optimization problems, as given by Rockafellar in 1973.
I shall review convergence results for non-convex Douglas-Rachford iterations.
A look into an extension of the proof of a class of normal numbers by Davenport and Erdos, as well as a leap into the world of experimental mathematics relating to the property of strong normality, in particular the strong normality of some very famous numbers.
Inspired by the Hadamard Maximal Determinant Problem, we investigate the possible Gram matrices from rectangular {+1, -1} matrices. We can fully classify and count the Gram matrices from rectangular {+1, -1} matrices with just two rows and have conjectured a counting formula for the Gram matrices when there are more than two rows in the original matrix.
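As a rough illustration of the two-row case (my own enumeration sketch, not the authors' classification; in particular the convention of forming the Gram matrix as $AA^T$ is an assumption), one can list the distinct Gram matrices directly:

    from itertools import product
    import numpy as np

    def distinct_grams(n):
        # distinct 2x2 Gram matrices A A^T over all 2 x n matrices with entries +1/-1
        seen = set()
        for entries in product([1, -1], repeat=2 * n):
            A = np.array(entries).reshape(2, n)
            seen.add(tuple(map(tuple, A @ A.T)))
        return len(seen)

    for n in range(1, 6):
        print(n, distinct_grams(n))  # the count grows linearly in n (n + 1 matrices)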
We build upon the ideas of short random walks in 2 dimensions in an attempt to understand the behaviours of these objects in higher dimensions. We explore the density and moment functions to find combinatorial and analytical results that generalise nicely.
A history of Pi in the American Mathematical Monthly and the variety of approaches to understanding this stubborn constant. I will focus on the common threads of discussion over the last century, especially the changing methods for computing pi to high precision, to illustrate how we have progressed to our current state.
In this talk I will be exploring certain aspects of permutations of length n that avoid the pattern 1324. This is an interesting pattern in that it is simple yet defies simple analysis. It can be shown that there is a growth rate, yet it cannot be shown what that growth rate is; nor has an explicit formula been found to give the number of permutations of length n which avoid the pattern (whereas this has been found for every other non-Wilf-equivalent length-4 pattern). Specifically, this talk will look at how an encoding technique (developed by Bona) for the 1324-avoiding permutations was cleverly used to obtain an upper bound for the growth rate of this class.
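For orientation, the first few counts can be checked by brute force (my own sketch, feasible only for very small n and unrelated to Bona's encoding):

    from itertools import permutations, combinations

    def contains_1324(p):
        # positions i < j < k < l whose values sit in the relative order of 1324
        return any(p[i] < p[k] < p[j] < p[l]
                   for i, j, k, l in combinations(range(len(p)), 4))

    def count_avoiders(n):
        return sum(1 for p in permutations(range(n)) if not contains_1324(p))

    print([count_avoiders(n) for n in range(1, 8)])  # [1, 2, 6, 23, 103, 513, 2762]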
The fairness of voting systems has been a topic of interest to mathematicians since 1770 when Marquis de Condorcet proposed the Condorcet criterion, and particularly so after 1951 when Kenneth Arrow proposed the Arrow impossibility theorem, which proved that no rank-order voting system can satisfy all properties one would desire.
The system I have been studying is known as runoff voting. It is a method of voting used around the world, often for presidential elections such as in France. Each voter selects their favourite candidate, and if any candidate receives above 50% of the vote, then they are elected. If no one reaches this, then another election will be held, but this time with only the top 2 candidates from the previous election. Whoever receives more votes in this second round will be elected. The runoff voting system satisfies a number of desired properties, though the running of the second round can have significant drawbacks. It can be very costly, it can result in periods of time without government, and it has been known to cause unrest in some politically unstable countries.
In my research I have introduced the parameter alpha, which varies the original threshold of 50% for a candidate winning the election in the first round. I am using both analytical methods and simulation to observe how the properties change with alpha.
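A minimal simulation sketch of this parameterised runoff rule (my own illustration; the model actually used in the research may differ) could look like this:

    import random

    def runoff_winner(ballots, alpha=0.5):
        # ballots: full preference orders (tuples of candidate ids); a candidate
        # wins outright in round one only if their vote share exceeds alpha.
        n = len(ballots)
        first = [b[0] for b in ballots]
        counts = {c: first.count(c) for c in set(first)}
        leader = max(counts, key=counts.get)
        if counts[leader] / n > alpha:
            return leader
        finalists = sorted(counts, key=counts.get, reverse=True)[:2]
        # round two: each ballot goes to whichever finalist it ranks higher
        tally = {c: sum(1 for b in ballots if min(finalists, key=b.index) == c)
                 for c in finalists}
        return max(tally, key=tally.get)

    ballots = [tuple(random.sample(range(4), 4)) for _ in range(1000)]
    print(runoff_winner(ballots, alpha=0.4))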
As an extension of Copeland and Erdos' original paper of the same title, we present a clearer and more complete version of the proof that the number of integers up to $N$ ($N$ sufficiently large) which are not $(\varepsilon,k)$-normal is less than $N^{\delta}$ where $\delta<1$. We also conjecture that the numbers formed from the concatenation of the increasing sequence $a_{1},a_{2},a_{3},\dots$ (provided the sequence is dense enough) are not strongly normal.
We consider the problem of scattering of waves by a string with attached masses, focussing on the problem in the time-domain. We propose this as a simple model for more complicated wave scattering problems which arise in the study of elastic metamaterials. We present the governing system of equations and show how we have solved them. Some numerical simulations are also presented.
The pooling problem is a nonlinear program (NLP) with applications in the refining and petrochemical industries, but also in mining. While it has been shown that the pooling problem is strongly NP-hard, it is one of the most promising NLPs to be solved to global optimality. In this talk I will discuss strengths and weaknesses of problem formulations and solution techniques. In particular, I will discuss convex and linear relaxations of the pooling problem, and show how they are related to graph theory, polyhedral theory and combinatorial optimization.
The Fourier Transform is a central and powerful tool in signal processing as well as being essential to Complex Analysis. However, it is limited to acting on complex-valued functions and thus cannot be applied directly to colour images (which have three real values' worth of output). In this talk, I discuss the limitations of current methods and then present several ways of extending the Fourier Transform to larger algebras (specifically the Quaternions and Clifford algebras). This informs a research plan involving the study and computer implementation of a particular Clifford Fourier Transform.
In this talk, accessible to a general audience and particularly to students, we will review the most important contributions of Leonhard Euler to mathematics. We will give a brief biography of Leonhard Euler and a broad survey of his greatest achievements.
Random walks have been used to model stochastic processes in many scientific fields. I will introduce invariant random walks on groups, where the transition probabilities are given by a probability measure. The Poisson boundary will also be discussed. It is a space associated with every group random walk that encapsulates the behaviour of the walks at infinity and gives a description of certain harmonic functions on the group in terms of the essentially bounded functions on the boundary. I will conclude with a discussion of project aims, namely to compute the boundary for certain random walks in new cases and to investigate the order structure of certain ideals in $L^1(G)$ defined for each invariant random walk.
Supervisors: Prof. George Willis, Dr Jeff Hogan.
The power domination problem is a variant of the famous domination problem. It has applications in the monitoring of electric power networks. In this talk, we give a literature review of the work done so far and the possible open areas of research. We also introduce two interesting variants of power domination: the resolving power domination problem and the propagation problem. We present preliminary work and a research plan for the future.
Supervisors: Prof. Mirka Miller, Dr Joe Ryan, Prof. Paul D Manuel.
I will lecture on 32 proofs of a theorem of Euler posed by mistake by Goldbach regarding Zeta(3). See http://www.carma.newcastle.edu.au/jon/goldbach-talk10.pdf.
A look into infinity, a few famous problems, and a little bit of normality.
It is well known that there is a one-to-one correspondence between signed plane graphs and link diagrams via the medial construction. This relationship was used in knot tabulation in the early days of knot theory. Indeed, it provides a method of studying links using graphs. Let $G$ be a plane graph and let $D(G)$ be the alternating link diagram corresponding to the (positive) $G$ obtained from the medial construction. A state $S$ of $D$ is a labeling of each crossing of $D$ by either $A$ or $B$. Making the corresponding split for each crossing gives a number of disjoint embedded closed circles, called state circles. We call a state which possesses the maximum number of state circles a maximum state. The maximum state is closely related to the genus of the corresponding link, and thus has been studied. In this talk, we will discuss some of the recent progress we have made on this topic.
When attacking various difficult problems in the field of Diophantine approximation the application of certain topological games has proven extremely fruitful in recent times due to the amenable properties of the associated 'winning' sets. Other problems in Diophantine approximation have recently been solved via the method of constructing certain tree-like structures inside the Diophantine set of interest. In this talk I will discuss how one broad method of tree-like construction, namely the class of 'generalised Cantor sets', can be formalized for use in a wide variety of problems. By introducing a further class of so-called 'Cantor-winning' sets we may then provide a criterion for arbitrary sets in a metric space to satisfy the desirable properties usually attributed to winning sets, and so in some sense unify the two above approaches. Applications of this new framework include new answers to questions relating to the mixed Littlewood conjecture and the $\times2, \times3$ problem. The talk will be aimed at a broad audience.
This is joint work with our former honours student Alex Muir. We look at the variety of lengths of cycles in Cayley graphs on generalized dihedral groups.
Consider a function from the circle to itself such that the derivative is greater than one at every point. Examples are maps of the form f(x) = mx for integers m > 1. In some sense, these are the only possible examples. This fact and the corresponding question for maps on higher dimensional manifolds was a major motivation for Gromov to develop pioneering results in the field of geometric group theory.
In this talk, I'll give an overview of this and other results relating dynamical systems to the geometry of the manifolds on which they act and (time permitting) talk about my own work in the area.
In celebration of both a special "big" pi Day (3/14/15) and the 2015 centennial of the Mathematical Association of America, we review the illustrious history of the constant $\pi$ in the pages of the American Mathematical Monthly.
This talk showcases some large numbers and where they came from.
A mixed formulation for a Tresca frictional contact problem in linear elasticity is considered in the context of boundary integral equations, and is later extended to Coulomb friction. The discrete Lagrange multiplier, an approximation of the surface traction on the contact part of the boundary, is a linear combination of biorthogonal basis functions. The biorthogonality allows us to rewrite the variational inequality constraints as a simple set of complementarity problems, enabling an efficient application of a semi-smooth Newton solver for the discrete mixed problems. Typically, the solution of frictional contact problems has reduced regularity at the interfaces between contact and non-contact and between stick and slip. To identify the a priori unknown locations of these interfaces, a posteriori error estimators of residual and hierarchical type are introduced. For a stabilized version of our mixed formulation (with the Poincaré–Steklov operator) we also present a priori estimates for the solution. Numerical results show the applicability of the error estimators and the superiority of hp-adaptivity compared to low-order uniform and adaptive approaches.
Ernst Stephan is a visitor of Bishnu Lamichhane.
This is joint work with our former honours student Alex Muir. We look at the variety of lengths of cycles in Cayley graphs on generalized dihedral groups.
This week I shall conclude my discussion of pancyclicity and Cayley graphs on generalized dihedral groups.
This presentation will explore the specificities of teaching mathematics in engineering studies that transcend the division between technical, scientific and design disciplines and how students of such studies are different from traditional engineering students. Data comes from a study at the Media Technology Department of Aalborg University in Copenhagen, Denmark. Media Technology is an education that combines technology and creativity and looks at the technology behind areas such as advanced computer graphics, games, electronic music, animations, interactive art and entertainment, to name a few. During the span of the education students are given a strong technical foundation, both in theory and in practice.
The presentation emerges from research by my PhD student Evangelia Triantafylloyu and myself. The study presented here used performance tests, attitude questionnaires, interviews with students and observations of mathematics-related courses. It focused on investigating student performance and retention in mathematics, attitudes towards mathematics, and preferences for teaching and learning methods, including a flipped classroom approach using videos produced by course teachers. The outcomes of this study can be used to create a profile of a typical student and to tailor approaches for teaching mathematics to this discipline. Moreover, they can be used as a reference point for investigating ways to improve mathematics education in other creative engineering studies.
About the Speaker: Olga Timcenko joined Medialogy department of Aalborg University in Copenhagen in fall 2006, as an Associate Professor. Before joining the University, she was a Senior Technology Consultant in LEGO Business development, LEGO Systems A/S, where she worked for different departments of LEGO on research and development of multimedia materials for children, including LEGO Digital Designer and LEGO Mindstorms NXT. She was active in FIRST LEGO League project (world-wide robotic competition among school children), and Computer-clubhouse project. During 2003-2006, she was LEGO’s team leader in EU-financed Network of excellence in Technology enhanced learning called Kaleidoscope, and actively participated in several Kaleidoscope JERIPs and SIGs. She has a Ph.D. in Robotics from Suddansk University in Odense, Denmark and she is author / co-author of 40+ conference and journal papers in the field of robotics and children and technology, and 4 international patents in the field of virtual 3D worlds / 3D user interfaces for children. Her last project for LEGO was redesign of the Mindstorms iconic programming language for children (the product was launched world-wide in August 2006).
I will discuss a combinatorial problem coming from database design. The problem can be interpreted as maximizing the number of edges in a certain hypergraph subject to a recoverability condition. It was solved recently by the high school student Max Aehle, who came up with a nice argument using the polynomial method.
Dengue is caused by four different serotypes; individuals infected by one serotype obtain lifelong immunity to that serotype but not to the other serotypes. Individuals with secondary infections may develop the more dangerous form of dengue, called dengue hemorrhagic fever (DHF), because of higher viral load. Because traditional measures are unsustainable, the use of the bacterium Wolbachia has been proposed as an alternative strategy against dengue fever. However, little research has been conducted to study the effectiveness of this intervention in the field, and understanding its effectiveness is important before it is widely implemented in the real world. In this talk, I will discuss the effectiveness of this intervention, present the mathematical models that I have developed to study it, and explain how these models differ from existing ones. I will also present the effects of the presence of multiple strains of dengue on dengue transmission dynamics.
Supervisors: David Allingham, Roslyn Hickson (IBM), Kathryn Glass (ANU), Irene Hudson
We will talk on the validity of the mean ergodic theorem along left Følner sequences in a countable amenable group G. Although the weak ergodic theorem always holds along any left Følner sequence in G, we will provide examples where the mean ergodic theorem fails in quite dramatic ways. On the other hand, if G does not admit any ICC quotients, e.g. if G is virtually nilpotent, then we will prove that the mean ergodic theorem does indeed hold along any left Følner sequence.
Based on joint work with M. Bjorklund (Chalmers).
We introduce a subfamily of additive enlargements of a maximally monotone operator $T$. Our definition is inspired by the seminal work of Fitzpatrick presented in 1988. These enlargements are a subfamily of the family of enlargements introduced by Svaiter in 2000. For the case $T = \partial f$, we prove that some members of the subfamily are smaller than the $\varepsilon$-subdifferential enlargement. For this choice of $T$, we can construct a specific enlargement which coincides with the $\varepsilon$-subdifferential. Since these enlargements are all additive, they can be seen as structurally closer to the $\varepsilon$-subdifferential enlargement.
Joint work with Juan Enrique Martínez-Legaz (Universitat Autonoma de Barcelona), Mahboubeh Rezaei (University of Isfahan, Iran), and Michel Théra (University of Limoges).
I will explain what an equation in a free group is, why they are interesting, and how to solve them. The talk will be accessible to anyone interested in maths or computer science or logic.
I have recently [2] shown that each group $Z_2^{2m}$ gives rise to a pair of bent functions with disjoint support, whose Cayley graphs are a disjoint pair of strongly regular graphs $\Delta_m[-1]$, $\Delta_m[1]$ on $4^m$ vertices. The two strongly regular graphs are twins in the sense that they have the same parameters $(\nu, k, \lambda, \mu)$. For $m < 4$, the two strongly regular graphs are isomorphic. For $m \geq 4$, they are not isomorphic, because the size of the largest clique differs. In particular, the largest clique size of $\Delta_m[-1]$ is $\rho(2^m)$ and the largest clique in $\Delta_m[1]$ has size at least $2^m$, where $\rho(n)$ is the Hurwitz-Radon function. This non-isomorphism result disproves a number of conjectures that I made in a paper on constructions of Hadamard matrices [1].
[1] Paul Leopardi, "Constructions for Hadamard matrices, Clifford algebras, and their relation to amicability - anti-amicability graphs", Australasian Journal of Combinatorics, Volume 58(2) (2014), pp. 214–248.
[2] Paul Leopardi, "Twin bent functions and Clifford algebras", accepted 13 January 2015 by the Springer Proceedings in Mathematics and Statistics (PROMS): Algebraic Design Theory and Hadamard Matrices (ADTHM 2014).
Supervisor: Murray Elder
Supervisor: Mike Meylan
Supervisor: Wadim Zudilin
In this talk I will present the main results of my PhD thesis (by the same name), which focuses on the application of matrix determinants as a means of producing number-theoretic results.
Motivated by an investigation of properties of the Riemann zeta function, we examine the growth rate of certain determinants of zeta values. We begin with a generalisation of determinants based on the Hurwitz zeta function, where we describe the arithmetic properties of its denominator and establish an asymptotic bound. We later employ a determinant identity to bound the growth of positive Hankel determinants. Noting the positivity of determinants of Dirichlet series allows us to prove specific bounds on determinants of zeta values in particular, and of Dirichlet series in general. Our results are shown to be the best that can be obtained from our method of bounding, and we conjecture a slight improvement could be obtained from an adjustment to our specific approach.
Within the course of this investigation we also consider possible geometric properties which are necessary for the positivity of Hankel determinants, and we examine the role of Hankel determinants in irrationality proofs via their connection with Padé approximation.
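As a quick numerical illustration of the kind of object involved (my own sketch; the exact determinants studied in the thesis may differ), one can compute Hankel determinants built from the zeta values $\zeta(2), \zeta(3), \dots$ with mpmath:

    from mpmath import mp, zeta, matrix, det

    mp.dps = 50  # working precision in decimal digits

    def hankel_zeta_det(n, start=2):
        # n x n Hankel determinant with entries zeta(start + i + j)
        H = matrix(n, n)
        for i in range(n):
            for j in range(n):
                H[i, j] = zeta(start + i + j)
        return det(H)

    for n in range(1, 6):
        print(n, hankel_zeta_det(n))  # positive, and shrinking very rapidly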
Computers are changing the way we do mathematics, as well as introducing new research agendas. Computational methods in mathematics, including symbolic and numerical computation and simulation, are by now familiar. These lectures will explore the way that "formal methods," based on formal languages and logic, can contribute to mathematics as well.
In the 19th century, George Boole argued that if we take mathematics to be the science of calculation, then symbolic logic should be viewed as a branch of mathematics: just as number theory and analysis provide means to calculate with numbers, logic provides means to calculate with propositions. Computers are, indeed, good at calculating with propositions, and there are at least two ways that this can be mathematically useful: first, in the discovery of new proofs, and, second, in verifying the correctness of existing ones.
The first goal generally falls under the ambit of "automated theorem proving" and the second falls under the ambit of "interactive theorem proving." There is no sharp distinction between these two fields, however, and the line between them is becoming increasingly blurry. In these lectures, I will provide an overview of both fields and the interactions between them, and speculate as to the roles they can play in mainstream mathematics.
I will aim to make the lectures accessible to a broad audience. The first lecture will provide a self-contained overview. The remaining lectures are for the most part independent of one another, and will not rely on the first lecture.
The seminar will provide a brief overview of the potential for category theory (CT) to contribute to quantitative analysis in the Social Sciences. This will be followed by a description of CT as a "Rosetta Stone" linking topology, algebra, computation, and physics together. This carries over to process thinking and circuit analysis. Coecke and Parquette's approach to diagrammatic analysis is examined to emphasize the efficiency of block shifting techniques over diagram chasing. Baez and Erbele's application of CT to feedback control is the main focus of analysis and this is followed by a brief excursion into multicategories (cobordisms), before finishing up with some material on coalgebras and transition systems.
Have you ever tried to add up the numbers 1+1/2+1/3+...? If you've never thought about this before, then give it a go (and don't Google the answer!) In this talk we will settle this relatively easy question and consider how things might change if we try to thin out the sum a bit. For instance, what if we only used the prime numbers 1/2+1/3+1/5+...? Or what about the square numbers 1+1/4+1/9+...? There will be some algebra and integration at times, but if you can add fractions (or use a calculator) then you should follow almost everything.
Starting with a substitution tiling, such as the Penrose tiling, we demonstrate a method for constructing infinitely many new substitution tilings. Each of these new tilings is derived from a graph iterated function system and the tiles typically have fractal boundary. As an application of fractal tilings, we construct an odd spectral triple on a C*-algebra associated with an aperiodic substitution tiling. Even though spectral triples on substitution tilings have been extremely well studied in the last 25 years, our construction produces the first truly noncommutative spectral triple associated with a tiling. My work on fractal substitution tilings is joint with Natalie Frank and Sam Webster, and my work on spectral triples is joint with Michael Mampusti.
Computers are changing the way we do mathematics, as well as introducing new research agendas. Computational methods in mathematics, including symbolic and numerical computation and simulation, are by now familiar. These lectures will explore the way that "formal methods," based on formal languages and logic, can contribute to mathematics as well.
In the 19th century, George Boole argued that if we take mathematics to be the science of calculation, then symbolic logic should be viewed as a branch of mathematics: just as number theory and analysis provide means to calculate with numbers, logic provides means to calculate with propositions. Computers are, indeed, good at calculating with propositions, and there are at least two ways that this can be mathematically useful: first, in the discovery of new proofs, and, second, in verifying the correctness of existing ones.
The first goal generally falls under the ambit of "automated theorem proving" and the second falls under the ambit of "interactive theorem proving." There is no sharp distinction between these two fields, however, and the line between them is becoming increasingly blurry. In these lectures, I will provide an overview of both fields and the interactions between them, and speculate as to the roles they can play in mainstream mathematics.
I will aim to make the lectures accessible to a broad audience. The first lecture will provide a self-contained overview. The remaining lectures are for the most part independent of one another, and will not rely on the first lecture.
In this colloquium-style presentation I will describe these combinatorial objects and how they relate to each other. Time permitting, I will also show how they can be used in other areas of Mathematics. Joint work with Sooran Kang and Samuel Webster.
Computers are changing the way we do mathematics, as well as introducing new research agendas. Computational methods in mathematics, including symbolic and numerical computation and simulation, are by now familiar. These lectures will explore the way that "formal methods," based on formal languages and logic, can contribute to mathematics as well.
In the 19th century, George Boole argued that if we take mathematics to be the science of calculation, then symbolic logic should be viewed as a branch of mathematics: just as number theory and analysis provide means to calculate with numbers, logic provides means to calculate with propositions. Computers are, indeed, good at calculating with propositions, and there are at least two ways that this can be mathematically useful: first, in the discovery of new proofs, and, second, in verifying the correctness of existing ones.
The first goal generally falls under the ambit of "automated theorem proving" and the second falls under the ambit of "interactive theorem proving." There is no sharp distinction between these two fields, however, and the line between them is becoming increasingly blurry. In these lectures, I will provide an overview of both fields and the interactions between them, and speculate as to the roles they can play in mainstream mathematics.
I will aim to make the lectures accessible to a broad audience. The first lecture will provide a self-contained overview. The remaining lectures are for the most part independent of one another, and will not rely on the first lecture.
Arising originally from the analysis of a family of compressed sensing matrices, Ian Wanless and I recently investigated a number of linear algebra problems involving complex Hadamard matrices. I will discuss our main result, which relates rank-one submatrices of Hadamard matrices to the number of non-zero terms in a representation of a fixed vector with respect to two unbiased bases of a finite dimensional vector space. Only a basic knowledge of linear algebra will be assumed.
Advantages of EEG in studying brain signals include excellent temporal localization and, potentially, good spatial localization, given good models for source localization in the brain. Phase synchrony and cross-frequency coupling are two phenomena believed to indicate cooperation of different brain regions in cognition through messaging via different frequency bands. Verifying these hypotheses requires the ability to extract time-frequency localized components from complex multicomponent EEG data. One such method, empirical mode decomposition, shows increasing promise in engineering, and we will review recent progress on this approach. Another potential method uses bases or frames of optimally time-frequency localized signals, the so-called prolate spheroidal wave functions. New properties of these functions developed in joint work with Jeff Hogan will be reviewed and potential applications to EEG will be discussed.
This will be an informal talk from our UoN Engineering colleague Prof Bill McBride who recently visited some "Mid-West" Universities in the USA. Prof McBride will discuss what he saw and learnt, with reference to first year maths teaching for Engineering students.
Managing railways in general, and high speed rail in particular, is a very complex task which involves many different interrelated decisions across the strategic, tactical and operational phases. In this research two different mixed integer linear programming models are presented, the first of their kind in the literature. In the first model a single line with two different train types is considered. In the second model, cyclic train timetabling and platform assignment problems are considered together and solved to optimality. For this model, methods for obtaining bounds on the first objective function are presented, and some pre-processing techniques to reduce the number of decision variables and constraints are also proposed. The proposed models' objectives are to minimize (1) the cycle length, called the Interval, and (2) the total journey time of all trains dispatched from their origin in each cycle. Here we explicitly consider the minimization of the cycle length using linear constraints and a linear objective function. The proposed models are different from, and faster than, the widely-used Periodic Event Scheduling Problem (PESP).
In recent years, there has been quite a bit of interest in generalized Fourier transforms in Clifford analysis and in particular for the so-called Clifford-Fourier transform.
In the first part of the talk I will provide some motivation for the study of this transform. In the second part we will develop a new technique to find a closed formula for its integral kernel, based on the familiar Laplace transform. As a bonus, this yields a compact and elegant formula for the generating function of all even dimensional kernels.
I'll give an overview of some recent developments in the theory of groups of automorphisms of trees which are discrete in the full automorphism group of the tree and are locally-transitive. I'll also mention some questions which have been provoked by this work.
We generalize the Burger-Mozes universal groups acting on regular trees by prescribing the local action on balls of a given radius, and study the basic properties of this construction. We then apply our results to prove a weak version of the Goldschmidt-Sims conjecture for certain classes of primitive permutation groups.
We study maximal monotone inclusions from the perspective of (convex) gap functions.
We propose a very natural gap function and will demonstrate how this function arises from the Fitzpatrick function — a convex function used effectively to represent maximal monotone operators.
This approach allows us to use the powerful strong Fitzpatrick inequality to analyse solutions of the inclusion.
This is joint work with Joydeep Dutta.
Functions that are piecewise defined are a common sight in mathematics, while convexity is a property especially desired in optimization. Suppose now a piecewise-defined function is convex on each of its defining components – when can we conclude that the entire function is convex? Our main result provides sufficient conditions for a piecewise-defined function $f$ to be convex. We also provide a sufficient condition for checking the convexity of a piecewise linear-quadratic function, which plays an important role in computer-aided convex analysis.
Based on joint work with Heinz H. Bauschke (Mathematics, UBC Okanagan) and Hung M. Phan (Mathematics, University of Massachusetts Lowell).
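A toy example (my own, not drawn from the abstract) shows why some condition beyond componentwise convexity is needed: take $f(x) = x$ for $x < 0$ and $f(x) = -x$ for $x \ge 0$, i.e. $f(x) = -|x|$. Each piece is linear, hence convex on its component, yet $f$ is not convex, since $f(0) = 0 > \tfrac{1}{2}\big(f(-1) + f(1)\big) = -1$. Sufficient conditions of the kind mentioned above must therefore control how the pieces fit together across the boundaries of the components.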
Mathematicians sometimes speak of the beauty of mathematics which to us is reflected in proofs and solutions for the most part. I am going to give a few proofs that I find very nice. This is stuff that post-grad discrete students certainly should know exists.
This completion talk is in two parts. In the first part, I will present a characterisation of the cyclic Douglas-Rachford method's behaviour, generalising a result which was presented in my confirmation seminar. In the second part, I will explore non-convex regularity notions in an application arising in biochemistry.
Amenability is of interest for many reasons, not least of which is its paradoxical decomposition into so many various characterisations, each equal to the whole. Two of these are the characterisation in terms of the cogrowth rate, and the existence of a Følner sequence. In exploring a known method of computing the cogrowth rate using a random walk, and by analyzing which groups seem to be pathological for this algorithm, we discover new connections between these properties.
Partitioning is a fundamental technique in graph theory, and graph partitioning techniques are used widely to solve several combinatorial problems. We will discuss the role of edge partitioning techniques in graph embedding. Graph embedding includes combinatorial problems such as the bandwidth problem, the wirelength problem and the forwarding index problem, and in addition some cheminformatics problems such as the Wiener index, the Szeged index and the PI index. In this seminar, we study convex partitions and their characterization. In addition, we analyze the relationship between convex partitions and some other edge partitions such as the Szeged edge partition and the channel edge partition. The graphs that induce convex partitions are bipartite. We will discuss the difficulties in extending this technique to non-bipartite graphs.
In either the inviscid limit of the Euler equations, or the viscously dominated limit of the Stokes equations, the determination of fluid flows can be reduced to solving singular integral equations on immersed structures and bounding surfaces. Further dimensional reduction is achieved using asymptotics when these structures are sheets or slender fibers. These reductions in dimension, and the convolutional second-kind structure of the integral equations, allows for very efficient and accurate simulations of complex fluid-structure interaction problems using solvers based on the Fast Multipole or related methods. These representations also give a natural setting for developing implicit time-stepping methods for the stiff dynamics of elastic structures moving in fluids. I'll discuss these integral formulations, their numerical treatment, and application to simulating structures moving in high-speed flows (flapping flags and flyers), and for resolving the complex interactions of many, possibly flexible, bodies moving in microscopic biological flows.
The existence of perfect matchings in regular graphs is a fundamental problem in graph theory, and it closely models many real-world problems such as broadcasting and network management. Recently, we have studied the number of edge-disjoint perfect matchings in regular graphs, and using some well-known results on the existence of perfect matchings and operations forcing unique perfect matchings in regular graphs, we have been able to make some pleasant progress. In this talk, we will present the new results and briefly discuss the proofs.
Stability analysis plays a central role in nonlinear control and systems theory. Stability is, in fact, the fundamental requirement for all practical control systems. In this research, advanced stability analysis techniques are reviewed and developed for discrete-time dynamical systems. In particular, we study the relationships between input-to-state stability related properties and $\ell_2$-type stability properties. These considerations naturally lead to the study of input-output models and, further, to questions of incremental stability and convergent dynamics. Future work will also outline several application scenarios for our theories, including observer analysis and secure communication.
Supervisors: A/Prof. Christopher Kellett and Dr. Björn Rüffer
Noga Alon's Combinatorial Nullstellensatz, published in 1999, is a statement about polynomials in many variables and what happens if one of these vanishes over the set of common zeros of some others. In contrast to Hilbert's Nullstellensatz, it makes strong assumptions about the polynomials it is talking about, and this leads to a tool for producing short and elegant proofs for numerous old and new results in combinatorial number theory and graph theory. I will present the proof of the algebraic result and some of the combinatorial applications in the 1999 paper.
After briefly describing a few more simple applications of Alon's Nullstellensatz, I will present in detail Reiher's amazing proof of the Kemnitz conjecture regarding lattice points in the plane.
Supervisors: Mirka Miller, Joe Ryan and Andrea Semanicova-Fenovcikova
We give some background to labeling schemes such as graceful, harmonious, magic, antimagic and irregular total labelings. Then we will describe why the study of graph labeling is important by narrating some applications of graph labeling. Next we will briefly describe methodology such as Roberts' construction for obtaining completely separating systems (CSS), which will help us determine antimagic labelings of graphs, and Alon's Combinatorial Nullstellensatz. We will illustrate an example from the many applications of graph labelling. Finally we will introduce reflexive irregular total labelling and explain its importance. To conclude, we present the research plan and timeline for the candidature.
I will complete the proof of the Kemnitz conjecture and make some remarks about related zero-sum problems.
In this talk we will begin with a brief history of the mathematics of aperiodic tilings of Euclidean space, highlighting their relevance to the theory of quasicrystals. Next we will focus on an important collection of point sets, cut and project sets, which come from a dynamical construction and provide us with a mathematical model for quasicrystals. After giving definitions and examples of these sets, we will discuss their relationship with Diophantine approximation, and show how the interplay between these two subjects has recently led to new results in both of them.
Lift-and-Project operators (which map compact convex sets to compact convex sets in a certain contractive way, via higher dimensional convex representations of these sets) provide an automatic way for constructing all facets of the convex hull of 0,1 vectors in a polytope given by linear or polynomial inequalities. They also yield tractable approximations provided that the input polytope is tractable and that we only apply the operators O(1) times. There are many generalizations of the theory of these operators which can be used, in theory, to generate (eventually, in the limit) arbitrarily tight, convex relaxations of essentially arbitrary nonconvex sets. Moreover, Lift-and-Project methods provide universal ways of applying Semidefinite Programming techniques to Combinatorial Optimization problems, and in general, to nonconvex optimization problems.
I will survey some of the developments (some recent, some not so recent) that I have been involved in, especially those utilizing Lift-and-Project methods and Semidefinite Optimization. I will touch upon the connections to Convex Algebraic Geometry and present various open problems.
We propose new path-following predictor-corrector algorithms for solving convex optimization problems in conic form. The main structural properties used in our design and analysis of the algorithms hinge on some key properties of a special class of very smooth, strictly convex barrier functions. Even though our analysis has primal and dual components, our algorithms work with the dual iterates only, in the dual space. Our algorithms converge globally at the same worst-case rate as the current best polynomial-time interior-point methods. In addition, our algorithms have the local superlinear convergence property under some mild assumptions. The algorithms are based on an easily computable gradient proximity measure, which ensures an automatic transformation of the global linear rate of convergence to the locally superlinear one under some mild assumptions. Our step-size procedure for the predictor step is related to the maximum step size (the one that takes us to the boundary).
This talk is based on joint work with Yu. Nesterov.
We survey the literature on orthogonal polynomials in several variables starting from Hermite's work in the late 19th century to the works of Zernike (1920's) and Ito (1950's). We explore combinatorial and analytic properties of the Ito polynomials and offer a general class in 2 dimensions which has interesting structural properties. Connections with certain PDEs will be mentioned.
Given a finite presentation of a group, proving properties of the group can be difficult. Indeed, many questions about finitely presented groups are unsolvable in general. Algorithms exist for answering some questions while for other questions algorithms exist for verifying the truth of positive answers. An important tool in this regard is the Todd-Coxeter coset enumeration procedure. It is possible to extract formal proofs from the internal working of coset enumerations. We give examples of how this works, and show how the proofs produced can be mechanically verified and how they can be converted to alternative forms. We discuss these automatically produced proofs in terms of their size and the insights they offer. We compare them to hand proofs and to the simplest possible proofs. We point out that this technique has been used to help solve a longstanding conjecture about an infinite class of finitely presented groups.
In scanning ptychography, an unknown specimen is illuminated by a localised illumination function resulting in an exit-wave whose intensity is observed in the far-field. A ptychography dataset is a series of these observations, each of which is obtained by shifting the illumination function to a different position relative to the specimen with neighbouring illumination regions overlapping. Given a ptychographic data set, the blind ptychography problem is to simultaneously reconstruct the specimen, illumination function, and relative phase of the exit-wave. In this talk I will discuss an optimisation framework which reveals current state-of-the-art reconstruction methods in ptychography as (non-convex) alternating minimization-type algorithms. Within this framework, we provide a proof of global convergence to critical points using the Kurdyka-Łojasiewicz property.
We use random walks to experimentally compute the first few terms of the cogrowth series for a finitely presented group. We propose candidates for the amenable radical of any non-amenable group, and a Følner sequence for any amenable group, based on convergence properties of random walks.
The Hardy and Paley-Wiener Spaces are defined due to important structural theorems relating the support of a function's Fourier transform to the growth rate of the analytic extension of a function. In this talk we show that analogues of these spaces exist for Clifford-valued functions in n dimensions, using the Clifford-Fourier Transform of Brackx et al and the monogenic ($n+1$ dimensional) extension of these functions.
We consider monotone systems defined by ODEs on the positive orthant in $\mathbb{R}^n$. These systems appear in various areas of application, and we will discuss in some level of detail one of these applications related to large-scale systems stability analysis.
Lyapunov functions are frequently used in stability analysis of dynamical systems. For monotone systems so called sum- and max-separable Lyapunov functions have proven very successful. One can be written as a sum, the other as a maximum of functions of scalar arguments.
We will discuss several constructive existence results for both types of Lyapunov function. To some degree, these functions can be associated with left- and right eigenvectors of an appropriate mapping. However, and perhaps surprisingly, examples will demonstrate that stable systems may admit only one or even neither type of separable Lyapunov function.
A motion which is periodic may be considered symmetric under a transformation in time. A measure of the phase relationship these motions have with respect to a geometric figure which is symmetric under some transformation in space is presented. The implications this has for the discretised patterns generated are discussed. The talk focuses on theoretical formalisms, such as those which display the fractal patterns of 'strange attractors', rather than group theory for symmetric transformations.
We study the family of self-inversive polynomials of degree $n$ whose $j$th coefficient is $\gcd(n,j)^k$, for a fixed integer $k \geq 1$. We prove that these polynomials have all of their roots on the unit circle, with uniform angular distribution. In the process we prove some new results on Jordan's totient function. We also show that these polynomials are irreducible, apart from an obvious linear factor, whenever $n$ is a power of a prime, and conjecture that this holds for all $n$. Finally we use some of these methods to obtain general results on the zero distribution of self-inversive polynomials and of their "duals" obtained from the discrete Fourier transforms of the coefficients sequence. (Joint work with Sinai Robins).
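As a quick numerical illustration of the unit-circle claim, the sketch below (my own, not from the talk) builds the polynomial with $j$th coefficient $\gcd(n,j)^k$ and reports how far its computed roots stray from the unit circle; all function names are assumptions of mine.

```python
# Numerical check of the claim that the self-inversive polynomials with
# j-th coefficient gcd(n, j)^k have all roots on the unit circle.
from math import gcd
import numpy as np

def gcd_polynomial_coeffs(n, k):
    """Coefficients c_j = gcd(n, j)^k for j = 0,...,n (note gcd(n, 0) = n)."""
    return [gcd(n, j) ** k for j in range(n + 1)]

def max_modulus_deviation(n, k):
    """Largest deviation of |root| from 1 over all roots of the degree-n polynomial."""
    coeffs = gcd_polynomial_coeffs(n, k)   # c_0, ..., c_n by increasing power
    roots = np.roots(coeffs[::-1])         # np.roots wants the highest degree first
    return max(abs(abs(r) - 1.0) for r in roots)

if __name__ == "__main__":
    for n in (6, 10, 12, 16):
        for k in (1, 2, 3):
            print(n, k, f"{max_modulus_deviation(n, k):.2e}")
```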
I will talk a bit about the benefits of a regular outlook.
We discuss problems of approximation of an irrational by rationals whose numerators and denominators lie in prescribed arithmetic progressions. Results are presented, on the one hand, from both a metrical and a non-metrical point of view, and on the other, from both an asymptotic and a uniform point of view. The principal novelty of this theory is a Khintchine-type theorem for uniform approximation in this setup. Time permitting, some applications of this work will be discussed.
A dimension adaptive algorithm for sparse grid quadrature in reproducing kernel Hilbert spaces on products of spheres uses a greedy algorithm to approximately solve a down-set constrained binary knapsack problem. The talk will describe the quadrature problem, the knapsack problem and the algorithm, and will include some numerical examples.
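As a rough illustration only, here is a toy greedy routine for a down-set constrained binary knapsack in the spirit described above: an index set of multi-indices may only be extended by indices whose "parents" are already included. This is my own sketch with made-up benefit and cost models, not the dimension-adaptive quadrature algorithm from the talk.

```python
# Toy greedy heuristic for a down-set constrained binary knapsack over multi-indices.
def parents(idx):
    """Multi-indices obtained by decreasing one coordinate of idx by 1."""
    return [idx[:i] + (idx[i] - 1,) + idx[i+1:] for i in range(len(idx)) if idx[i] > 0]

def greedy_downset_knapsack(benefit, cost, budget, dim):
    """Greedily grow a down-set of multi-indices, maximising benefit within the budget."""
    chosen = {(0,) * dim}
    spent = cost((0,) * dim)
    while True:
        # Admissible candidates: not yet chosen, and all parents already chosen (down-set rule).
        candidates = set()
        for idx in chosen:
            for i in range(dim):
                nxt = idx[:i] + (idx[i] + 1,) + idx[i+1:]
                if nxt not in chosen and all(p in chosen for p in parents(nxt)):
                    candidates.add(nxt)
        affordable = [c for c in candidates if spent + cost(c) <= budget]
        if not affordable:
            return chosen
        best = max(affordable, key=lambda c: benefit(c) / cost(c))  # best benefit-to-cost ratio
        chosen.add(best)
        spent += cost(best)

# Example usage with made-up benefit/cost models:
sel = greedy_downset_knapsack(benefit=lambda i: 2.0 ** (-sum(i)),
                              cost=lambda i: 2 ** sum(i),
                              budget=64, dim=2)
print(sorted(sel))
```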
We will be answering the following question raised by Christopher Bishop:
'Suppose we stand in a forest with tree trunks of radius $r > 0$ and no two trees centered closer than unit distance apart. Can the trees be arranged so that we can never see further than some distance $V < \infty$, no matter where we stand and what direction we look in? What is the size of $V$ in terms of $r$?'
The methods used to study this problem involve Fourier analysis and sharp estimates of exponential sums.
We consider the stability of a class of abstract positive systems originating from the recurrence analysis of stochastic systems, such as multiclass queueing networks and semimartingale reflected Brownian motions. We outline that this class of systems can also be described by differential inclusions in a natural way. We will point out that because of the positivity of the systems the set-valued map defining the differential inclusion is not upper semicontinuous in general and, thus, well-known characterizations of asymptotic stability in terms of the existence of a (smooth) Lyapunov function cannot be applied to this class of positive systems. Following an abstract approach, based on common properties of the positive systems under consideration, we show that asymptotic stability is equivalent to the existence of a Lyapunov function. Moreover, we examine the existence of smooth Lyapunov functions. Putting an assumption on the trajectories of the positive systems which demands for any trajectory the existence of a neighboring trajectory such that their difference grows linearly in time and distance of the starting points, we prove the existence of a $C^\infty$-smooth Lyapunov function. Looking at this hypothesis from the differential inclusions perspective it turns out that differential inclusions defined by Lipschitz continuous set-valued maps taking nonempty, compact and convex values have this property.
We consider identities satisfied by discrete analogues of Mehta-like integrals. The integrals are related to Selberg’s integral and the Macdonald conjectures. Our discrete analogues have the form
$$S_{\alpha,\beta,\delta} (r,n) := \sum_{k_1,\ldots,k_r\in\mathbb{Z}} \prod_{1\leq i < j\leq r} |k_i^\alpha - k_j^\alpha|^\beta \prod_{j=1}^r |k_j|^\delta \binom{2n}{n+k_j},$$ where $\alpha,\beta,\delta,r,n$ are non-negative integers subject to certain restrictions.
In the cases that we consider, it is possible to express $S_{\alpha,\beta,\delta} (r,n)$ as a product of Gamma functions and simple functions such as powers of two. For example, if $1 \leq r \leq n$, then $$S_{2,2,3} (r,n) = \prod_{j=1}^r \frac{(2n)!j!^2}{(n-j)!^2}.$$
The emphasis of the talk will be on how such identities can be obtained, with a high degree of certainty, using numerical computation. In other cases the existence of such identities can be ruled out, again with a high degree of certainty. We shall not give any proofs in detail, but will outline the ideas behind some of our proofs. These involve $q$-series identities and arguments based on non-intersecting lattice paths.
This is joint work with Christian Krattenthaler and Ole Warnaar.
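To illustrate how such identities can be checked numerically, the following sketch (my own, not the authors' code) brute-forces $S_{2,2,3}(r,n)$ for small $r,n$ (the binomial factors vanish outside $|k_j|\leq n$, so the sum is finite) and compares it with the displayed closed form.

```python
# Brute-force check of the displayed identity for S_{2,2,3}(r, n).
from math import comb, factorial
from itertools import product

def S(alpha, beta, delta, r, n):
    """Compute S_{alpha,beta,delta}(r, n) by summing over all k with |k_j| <= n."""
    total = 0
    for ks in product(range(-n, n + 1), repeat=r):
        term = 1
        for i in range(r):
            for j in range(i + 1, r):
                term *= abs(ks[i] ** alpha - ks[j] ** alpha) ** beta
        for k in ks:
            term *= abs(k) ** delta * comb(2 * n, n + k)
        total += term
    return total

def rhs(r, n):
    """Claimed closed form for S_{2,2,3}(r, n), valid for 1 <= r <= n."""
    out = 1
    for j in range(1, r + 1):
        out *= factorial(2 * n) * factorial(j) ** 2 // factorial(n - j) ** 2
    return out

for (r, n) in [(1, 2), (2, 3), (2, 4), (3, 4)]:
    print(r, n, S(2, 2, 3, r, n) == rhs(r, n))
```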
The use of GPUs for scientific computation has undergone phenomenal growth over the past decade, as hardware originally designed with limited instruction sets for image generation and processing has become fully programmable and massively parallel. This talk discusses the classes of problem that can be attacked with such tools, as well as some practical aspects of implementation. A direction for future research by the speaker is also discussed.
I am going to discuss a construction of functional calculus $$f\mapsto f(A,B),$$ where $A$ and $B$ are noncommuting self-adjoint operators. I am going to discuss the problem of estimating the norms $\|f(A_1,B_1)-f(A_2,B_2)\|$, where the pair $(A_2,B_2)$ is a perturbation of the pair $(A_1,B_1)$.
We'll answer the question "What's a wavelet?" and discuss continuous wavelet transforms on the line and connections with representation theory and singular integrals. The focus will then turn to discretization techniques, including multiresolution analysis. Matrix completion problems arising from higher-dimensional wavelet constructions will also be described.
Firstly, from [1] we consider a mixed formulation for an elliptic obstacle problem for a 2nd order operator and present an hp-FE interior penalty discontinuous Galerkin (IPDG) method. The primal variable is approximated by a linear combination of Gauss-Lobatto-Lagrange (GLL) basis functions, whereas the discrete Lagrangian multiplier is a linear combination of biorthogonal basis functions. A residual based a posteriori error estimate is derived. For its construction the approximation error is split into a discretization error of a linear variational equality problem and additional consistency and obstacle condition terms.
Secondly, an hp-adaptive $C^0$-interior penalty method for the bi-Laplace obstacle problem is presented from [2]. Again we take a mixed formulation using GLL-basis functions for the primal variable and biorthogonal basis functions for the Lagrangian multiplier and present also a residual a posteriori error estimate. For both cases (2nd and 4th order obstacle problems) our numerical experiments clearly demonstrate the superior convergence of the hp-adaptive schemes compared with uniform and h-adaptive schemes.
References
[1] L. Banz, E.P. Stephan, A posteriori error estimates of hp-adaptive IPDG-FEM for elliptic obstacle problems, Applied Numerical Mathematics 76 (2014), 76-92.
[2] L. Banz, B.P. Lamichhane, E.P. Stephan, An hp-adaptive $C^0$-interior penalty method for the obstacle problem of clamped Kirchhoff plates, preprint (2015).
(Joint work with Lothar Banz, University Salzburg, Austria)
At the 1987 Ramanujan Centenary meeting Dyson asked for a coherent group-theoretical structure for Ramanujan's mock theta functions analogous to Hecke's theory of modular forms. We extend the work of Bringmann and Ono, and Ahlgren and Treneer on answering this question.
I will explain what groups are and give some examples and applications.
This talk is devoted to three basic forms of the inverse function theorem. The classical inverse function theorem identifies a smooth single-valued localization of the inverse under the condition of nonsingularity of the Jacobian.
A Diophantine m-tuple is a set of m positive integers {a_1, . . . , a_m} such that the product of any two of them plus 1 is a square. For example, {1, 3, 8, 120} is a Diophantine quadruple found by Fermat. It is known that there are infinitely many such examples with m = 4 and none with m = 6. No example is known with m = 5, but if any exists then there are only finitely many. In my talk, I will survey what is known about this problem, as well as its variations, where one replaces the ring of integers by the ring of integers in some finite extension of Q, or by the field of rational numbers, or one looks at a variant of this problem in the ring of polynomials with coefficients in a field of characteristic zero, or when one replaces the squares by perfect powers of a larger exponent, or by members of some other interesting sequence such as the Fibonacci numbers, and so on.
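For readers who want to experiment, a minimal check of the defining property is easy to write; the sketch below is my own illustration.

```python
# Check the defining property of a Diophantine m-tuple:
# the product of any two distinct elements plus 1 is a perfect square.
from itertools import combinations
from math import isqrt

def is_diophantine_tuple(nums):
    """Return True if a*b + 1 is a perfect square for every pair a < b in nums."""
    for a, b in combinations(nums, 2):
        m = a * b + 1
        r = isqrt(m)
        if r * r != m:
            return False
    return True

print(is_diophantine_tuple([1, 3, 8, 120]))   # Fermat's quadruple -> True
print(is_diophantine_tuple([1, 3, 8, 121]))   # -> False
```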
An order picking system in a distribution center (DC) owned by Pep Stores Ltd. (PEP), the largest single brand retailer in South Africa, is investigated. Twelve independent unidirectional picking lines situated in the center of the DC are used to process all piece picking. Each picking line consists of a number of locations situated in a cyclical formation around a central conveyor belt and is serviced by multiple pickers walking in a clockwise direction.
On a daily planning level, three sequential decision tiers exist and are described as:
These sub-problems are too complex to solve together and are addressed independently and in reverse sequence using mathematical programming and heuristic techniques. It is shown that the total walking distances of pickers may be significantly reduced when solving sub-problems 1 and 3 and that there is no significant impact when solving sub-problem 2. Moreover, by introducing additional work balance and small carton minimisation objectives into sub-problem 1 better trade-offs between objectives are achieved when compared to the current practice.
In this presentation we address the issues and challenges for the future of education and how Maplesoft is committed to offering tools such as Möbius™ to handle these challenges. Möbius is a comprehensive online courseware environment that focuses on science, technology, engineering, and mathematics (STEM). It is built on the notion that people learn by doing. With Möbius, your students can explore important concepts using engaging, interactive applications, visualize problems and solutions, and test their understanding by answering questions that are graded instantly. Throughout the entire lesson, students remain actively engaged with the material and receive constant feedback that solidifies their understanding.
When you use Möbius to develop and deliver your online offerings, you remain in full control of your content and the learning experience.
For more information on Möbius please visit http://maplesoft.com/products/Mobius/.
The Degree/Diameter Problem for graphs has its motivation in the efficient design of interconnection networks. It seeks to find the maximum possible order of a graph with a given (maximum) degree and diameter. It is known that graphs attaining the maximum possible value (the Moore bound) are extremely rare, but much activity is focussed on finding new examples of graphs or families of graphs with orders approaching the bound as closely as possible. The problem was first mentioned in 1964. Many great mathematicians have studied it and obtained results, but there still remain many unsolved problems on this subject. Our late colleague Professor Mirka Miller greatly expanded this area, and many new results were given by her and her students. One of the problems she was recently interested in was the Degree/Diameter problem for mixed graphs, i.e. graphs in which we allow both undirected edges and arcs (directed edges).
Some new results about the upper bound (the Moore bound for mixed graphs) were obtained in 2015. This talk presents the main known results about these graphs.
We will review the (now classical) scheme of basic ($q$-) hypergeometric orthogonal polynomials. It contains more than twenty families; for each family there exists at least one positive weight with respect to which the polynomials are orthogonal provided the parameter $q$ is real and lies between 0 and 1. In the talk we will describe how to reduce the scheme allowing the parameters in the families to be complex. The construction leads to new orthogonality properties or to generalization of known ones to the complex plane.
Model sets, which go back to Yves Meyer (1972), are a versatile class of structures with amazing harmonic properties. They are particularly relevant for mathematical quasicrystals. More recently, also systems such as the square-free integers or the visible lattice points have been studied in this context, leading to the theory of weak model sets. This talk will review some of the development, and introduce some of the concepts in the field.
Peter Frankl's union-closed sets conjecture, which dates back to (at least) 1979, states that for every finite family of sets which is closed under taking unions there is an element contained in at least half of the sets. Despite considerable efforts the general conjecture is still open, and the latest polymath project is an attempt to make progress. I will give an overview of equivalent variants of the conjecture and discuss known special cases and partial results.
Start labelling the vertices of the square grid with 0's and 1's with the condition that any pair of neighbouring vertices cannot both be labelled 1. If one considers the 1's to be the centres of small squares (rotated 45 degrees) then one has a picture of square-particles that cannot overlap.
This problem of "hard-squares" appears in different areas of mathematics - for example it has appeared separately as a lattice gas in statistical mechanics, as independent sets in combinatorics and as the golden-mean shift in symbolic dynamics. A core question in this model is to quantify the number of legal configurations - the entropy. In this talk I will discuss what is known about the entropy and describe our recent work finding rigorous and precise bounds for hard-squares and related problems.
This is work together with Yao-ban Chan.
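A standard way to count hard-square configurations exactly on small grids is a column transfer matrix; the sketch below (my own illustration, not the authors' code) counts legal configurations on an m x n grid and gives a crude per-site entropy estimate.

```python
# Transfer-matrix count of hard-square configurations on an m-row, n-column grid.
from math import log

def hard_square_count(m, n):
    """Number of 0/1 labellings of an m x n grid with no two adjacent 1s."""
    # Admissible columns: m-bit patterns with no two vertically adjacent 1s.
    cols = [c for c in range(1 << m) if c & (c << 1) == 0]
    counts = {c: 1 for c in cols}                  # configurations of a single column
    for _ in range(n - 1):
        new = {c: 0 for c in cols}
        for prev, cnt in counts.items():
            for c in cols:
                if prev & c == 0:                  # no horizontally adjacent 1s
                    new[c] += cnt
        counts = new
    return sum(counts.values())

if __name__ == "__main__":
    for m in (4, 6, 8):
        N = hard_square_count(m, m)
        print(m, N, log(N) / (m * m))              # crude per-site entropy estimate
```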
I’m going to give a summary of research projects I have been involved in over my study leave; they represent a shared theme: retailing. The projects which I’m going to talk about are:
We report upon insights gained into the BMath through: "Conversations: BMath Experiences". This is a project that was initiated by the BMath Convener in collaboration with NUMERIC. We invited first year BMath students to semi-structured conversations around their experiences in their degree. We will be sharing general insights that we have gained into the BMath through the project.
Speakers: Mike Meylan, Andrew Kepert, Liz Stojanovski and Judy-anne Osborn.
I'll continue to discuss Frankl's union-closed sets conjecture. In particular I'll present two possible approaches (local configurations and averaging) and indicate obstacles to proving the general case using these methods.
How confident are you in your choice? Such a simple but important question for people to answer. Yet, capturing how people answer this question has proven challenging for mathematical models of cognition. Part of the challenge has been that these models assume confidence is a static variable based on the same information used to make a decision. In the first part of my talk, I will review my dynamic theory of confidence, two-stage dynamic signal detection theory (2DSD). 2DSD is based on the premise that the same evidence accumulation process that underlies choice is used to make confidence judgments, but that post-decisional processing of information contributes to confidence judgments. Thus, 2DSD correctly predicts that the resolution of confidence judgments, or their ability to discriminate between correct and incorrect choices, increases over time. However, I have also found that the dynamics of confidence is driven by other factors including the very act of making a choice. In the second part of the talk, I will show how 2DSD and other models derived from classical stochastic theories are unable to parsimoniously account for this stable interference effect of choice. In contrast, quantum random walk models of evidence accumulation account for this property by treating judgments and decisions as a measurement process by which a definite state is created from an indefinite state. In summary, I hope to show how better understanding the dynamic nature of confidence can provide new methods for improving the accuracy of people’s confidence, but also reveal new properties of the deliberation process including perhaps the quantum nature of evidence accumulation.
In this talk, I will outline my interest in, and results towards, the Erdős Discrepancy Problem (EDP). I came across this problem as a PhD student sometime around 2007. At the time, many of the best number theorists in the world thought that this problem would outlast the Riemann hypothesis. I had run into some interesting examples of some structured sequences with very small growth, and in some of my early talks, I outlined a way one might be able to attack the EDP. As it turns out, the solution reflected quite a bit of what I had guessed. And I say 'guessed' because I was so young and naïve that my guess was nowhere near informed enough to actually have the experience behind it to call it a conjecture. In this talk, I will go into what I was thinking and provide proof sketches of what turn out to be the extremal examples of EDP.
The discrepancy of a graph measures how evenly its edges are distributed. I will talk about a lower bound which was proved by Bollobas and Scott in 2006, and extends older results by Erdos, Goldberg, Pach and Spencer. The proof provides a nice illustration of the probabilistic method in combinatorics. If time allows I will outline how this stuff can be used to prove something about convex hulls of bilinear functions.
Mapping class groups are groups which arise naturally from homeomorphisms of surfaces. They are ubiquitous: from hyperbolic geometry, to combinatorial group theory, to algebraic geometry, to low dimensional topology, to dynamics. Even to this colloquium!
In this talk, I will give a survey of some of the highlights from this beautiful world, focusing on how mapping class groups interact with covering spaces of surfaces. In particular, we will see how a particular order 2 element (the hyperelliptic involution) and its centraliser (the hyperelliptic mapping class group) play an important role, both within the world of mapping class groups and in other areas of mathematics. If time permits, I will briefly touch on some recent joint work with Rebecca Winarski that generalises the hyperelliptic story.
No experience with mapping class groups will be assumed, and this talk will be aimed at a general mathematics audience.
B. Gordon (1961) defined sequenceable groups and G. Ringel (1974) defined R-sequenceable groups. Friedlander, Gordon and Miller conjectured that finite abelian groups are either sequenceable or R-sequenceable. The preceding definitions are special cases of what T. Kalinowski and I are calling an orthogonalizeable group, namely, a group for which every Cayley digraph on the group admits either an orthogonal directed path or an orthogonal directed cycle. I shall go over the history and current status of this topic along with a discussion about the completion of a proof of the FGM conjecture.
Start by placing piles of indistinguishable chips on the vertices of a graph. A vertex can fire if it's supercritical; i.e., if its chip count exceeds its valency. When this happens, it sends one chip to each neighbour and annihilates one chip. Initialize a game by firing all possible vertices until no supercriticals remain. Then drop chips one-by-one on randomly selected vertices, at each step firing any supercritical ones. Perhaps surprisingly, this seemingly haphazard process admits analysis. And besides having diverse applications (e.g., in modelling avalanches, earthquakes, traffic jams, and brain activity), chip-firing reaches into numerous mathematical crevices. The latter include, alphabetically, algebraic combinatorics, discrepancy theory, enumeration, graph theory, stochastic processes, and the list could go on (to zonotopes). I'll share some joint work—with Dave Perkins—that touches on a few items from this list. The talk'll be accessible to non-specialists. Promise!
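A direct simulation of the firing rule described above is only a few lines; the following sketch is my own illustration (the function names and the example graph are made up), not the joint work itself.

```python
# Chip-firing as described: a vertex fires when its chip count exceeds its valency,
# sending one chip to each neighbour and annihilating one chip; we stabilise,
# then drop random chips one at a time, stabilising after each drop.
import random

def stabilise(chips, adj):
    """Fire supercritical vertices until none remain."""
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            if chips[v] > len(nbrs):               # supercritical
                chips[v] -= len(nbrs) + 1          # one chip per neighbour + one annihilated
                for u in nbrs:
                    chips[u] += 1
                changed = True
    return chips

def drive(adj, steps, seed=0):
    """Drop `steps` chips on randomly selected vertices, stabilising after each drop."""
    rng = random.Random(seed)
    chips = {v: 0 for v in adj}
    for _ in range(steps):
        chips[rng.choice(list(adj))] += 1
        stabilise(chips, adj)
    return chips

# Example: a 4-cycle.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(drive(adj, steps=20))
```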
I am now refereeing a manuscript on the above and I’ll tell you about its contents.
This week I shall finish my discussion of sequenceable and R-sequenceable groups.
A metric generator is a set W of vertices of a graph G such that for every pair of vertices u,v of G, there exists a vertex w in W with the condition that the length of a shortest path from u to w is different from the length of a shortest path from v to w. In this case the vertex w is said to resolve or distinguish the vertices u and v. The minimum cardinality of a metric generator for G is called the metric dimension. The metric dimension problem is to find a minimum metric generator in a graph G. In this talk I will discuss the metric dimension and partition dimension of Cayley (di)graphs.
Lehmer's famous question concerns the existence of monic integer coefficient polynomials with Mahler measure smaller than a certain constant. Despite significant partial progress, the problem has not been fully resolved since its formulation in 1933. A powerful result independently proven by Lawton and Boyd in the 1980s establishes a connection between the classical Mahler measure of single variable polynomials and the generalized Mahler measure of multivariate polynomials. This led to speculation that it may be possible to answer Lehmer's question in the affirmative with a multivariate polynomial although the general consensus among researchers today is that no such polynomial exists. We show that each possible candidate among two variable polynomials corresponding to curves of genus 1 can be bi-rationally mapped onto a polynomial with Mahler measure greater than Lehmer's constant. Such bi-rational maps are expected to preserve the Mahler measure for large values of a certain parameter.
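For orientation, the classical (one-variable) Mahler measure is easy to estimate numerically as the product of $\max(1,|\text{root}|)$ over the complex roots (times the absolute value of the leading coefficient); the sketch below is my own illustration using Lehmer's polynomial.

```python
# Numerical estimate of the classical Mahler measure of a one-variable polynomial.
import numpy as np

def mahler_measure(coeffs):
    """Mahler measure of the polynomial with coefficients listed from highest degree down."""
    roots = np.roots(coeffs)
    return abs(coeffs[0]) * np.prod([max(1.0, abs(r)) for r in roots])

# Lehmer's polynomial x^10 + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1,
# whose measure ~ 1.17628 is the smallest known value exceeding 1.
lehmer = [1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]
print(mahler_measure(lehmer))
```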
Milutin is a completing Honours Student of Wadim Zudilin.
In 2000, after investigating the published literature (for which I had reason at the time), I realised that there was clearly confusion surrounding the question of how WW2 Japanese army and navy codes had been broken by the Allies.
Fourteen years later, my academic colleague Peter Donovan and I understood why that was so: the archival documents needed to perform this task, plus the mathematical understanding needed to interpret correctly these documents, had only exposed themselves through our combined researches over this long period. The result, apart from a number of research publications in journals, is our book, "Code Breaking in the Pacific", published by Springer International in 2014.
Both the Imperial Japanese Army (IJA) and the Imperial Japanese Navy (IJN) used an encryption system involving a code book and then a second stage encipherment, a system which we call an additive cipher system, for their major codes – not a machine cipher such as the Enigma machines used widely by German forces in WW2 or the Typex/Sigaba/ECM machines used by the Allies. Thus, the type of attack needed to crack such a system is very different to those described in books about Bletchley Park and its successes against Enigma ciphers.
However, there is a singular difference: while the IJN’s main coding system, known to us as JN-25, was broken from its inception and throughout the Pacific War, yielding for example the intelligence information that enabled the battles of the Coral Sea and Midway to occur, or the shooting down of Admiral Yamamoto to be planned, the many IJA coding systems in use were, with one exception, never broken!
I will describe the general structure of additive systems, the rational way developed to attack them and its usual failure in practice, and the "miracle" that enabled JN-25 to be broken - probably the best-kept secret of the entire Pacific War: multiples of three! Good maths, but not highly technical!
The zero forcing number, Z(G), of a graph G is the minimum cardinality of a set S of black vertices (the vertices in V(G)\S are colored white) such that V(G) is turned black after finitely many applications of "the color-change rule": a white vertex is converted to black if it is the only white neighbor of a black vertex.
Zero forcing number was introduced by the "AIM Minimum Rank – Special Graphs Work Group". In this talk, I present an overview of the results obtained from their paper.
In 1935 Erdos and Szekeres proved that there exists a function f such that among f(n) points in the plane in general position there are always n that form the vertices of a convex n-gon. More precisely, they proved a lower and an upper bound for f(n) and conjectured that the lower bound is sharp. After 70 years with very limited progress, there have been a couple of small improvements of the upper bound in recent years, and finally last month Andrew Suk announced a huge step forward: a proof of an asymptotic version of the conjecture.
I plan two talks on this topic: (1) a brief introduction to Ramsey theory, and (2) an outline of Suk's proof.
I continue the discussion of the Erdos-Szekeres conjecture about points in convex position with an outline of the recent proof of an asymptotic version of the conjecture.
The density of 1's in the Kolakoski sequence is conjectured to be 1/2. Proving this is an open problem in number theory. I shall cast the density question as a problem in combinatorics, and give some visualisations which may suggest ways to gain further insight into the conjecture.
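The sequence itself is simple to generate, which makes the density easy to explore experimentally; the following generator is my own sketch.

```python
# Generate the Kolakoski sequence over {1, 2} and watch the running density of 1's,
# conjectured to tend to 1/2.
def kolakoski(n):
    """Return the first n terms of the Kolakoski sequence 1,2,2,1,1,2,1,2,2,1,..."""
    seq = [1, 2, 2]
    i = 2                      # seq[i] gives the length of the next run to append
    while len(seq) < n:
        next_symbol = 1 if seq[-1] == 2 else 2
        seq.extend([next_symbol] * seq[i])
        i += 1
    return seq[:n]

for n in (10**3, 10**5, 10**6):
    s = kolakoski(n)
    print(n, s.count(1) / n)
```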
The finite element method is a very popular technique to approximate solutions of partial differential equations. The mixed finite element method is a type of finite element method in which extra variables are introduced in the formulation. This is useful for problems in which more than one unknown is desirable. In this research, we apply the mixed finite element method to some applications, such as the Poisson equation, the elasticity equation, and a sixth-order problem. Furthermore, we also utilise the mixed finite element method to solve the linear wave equation, which arises from a real-world problem.
Let $\Sigma_d^{++}(\R)$ be the set of positive definite matrices with determinant 1 in dimension $d\ge 2$. Identifying two $SL_d(\Z)$-congruent elements in $\Sigma_d^{++}(\R)$ gives rise to the space of reduced quadratic forms of determinant one, which in turn can be identified with the locally symmetric space $X_d:=SL_d(\Z)\backslash SL_d(\R)\slash SO_d(\R)$. Equip the latter space with its natural probability measure coming from the Haar measure on $SL_d(\R)$. In 1998, Kleinbock and Margulis established very sharp estimates for the probability that an element of $X_d$ takes a value less than a given real number $\delta>0$ over the non-zero lattice points $\Z^d\backslash\{ \bm{0} \}$.
This talk will be concerned with extensions of such estimates to a large class of probability measures arising either from the spectral or the Cholesky decomposition of an element of $\Sigma_d^{++}(\R)$. The sharpness of the bounds thus obtained is also established for a subclass of these measures.
This theory has been developed with a view towards application to Information Theory. Time permitting, we will briefly introduce this topic and show how the estimates previously obtained play a crucial role in the analysis of the performance of communication networks.
This is work joint with Evgeniy Zorin (University of York). Dr Adiceam is a visitor of Dr Mumtaz Hussain.
The standard height function $H(\mathbf p/q) = q$ of simultaneous approximation can be calculated by taking the LCM (least common multiple) of the denominators of the coordinates of the rational points: $H(p_1/q_1,\ldots,p_d/q_d) = \mathrm{lcm}(q_1,\ldots,q_d)$. If the LCM operator is replaced by another operator such as the maximum, minimum, or product, then a different height function, and thus a different theory of simultaneous approximation, will result. In this talk I will discuss some basic results regarding approximation by these nonstandard height functions, as well as mentioning their connection with intrinsic approximation on Segre manifolds using standard height functions. This work is joint with Lior Fishman.
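As a small illustration of how these heights differ, the sketch below (mine, with assumed function names) computes the lcm, max, min and product heights of a rational point from its reduced denominators.

```python
# Compare height functions built from the reduced denominators of a rational point.
from fractions import Fraction
from math import lcm, prod

def heights(point):
    """Heights of a tuple of rationals via lcm / max / min / product of reduced denominators."""
    dens = [Fraction(x).denominator for x in point]
    return {"lcm": lcm(*dens), "max": max(dens), "min": min(dens), "prod": prod(dens)}

print(heights((Fraction(1, 2), Fraction(3, 4), Fraction(5, 6))))
# lcm = 12, max = 6, min = 2, prod = 48
```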
Dr Simmons is a visitor of Dr Mumtaz Hussain.
Come join us for a discussion and public forum on 'Creativity & Mathematics' at Newcastle Museum on Monday, 1st August. We've lined up world leading experts from a diverse set of disciplines to shed some light on the connection between creativity and mathematics.
It's free, but please register for catering purposes. It begins at 6:30 pm with finger food and a chat before the forum itself gets under way at 7 pm.
The panel discussion and forum will have lots of audience involvement. The panel members are from a diverse group of disciplines each concerned in some way with the relationship between creativity and mathematics. Prof. John Wilson (The University of Oxford), a leading expert on group theory, is intrigued by the similarities between mathematicians finding new ideas and composers creating new music. Prof. George Willis (University of Newcastle) will talk about the creativity of mathematics itself. Prof. Michael Ostwald will spin gold around mathematical constraints and architectural forms. A/Prof. Phillip McIntyre is an international expert on creativity and author of The Creative System in Action. He has been described as having a mind completely unpolluted by mathematics!
Come along and enjoy an evening of mental stimulation and unexpected insights. You never know: participants might walk away with a completely different view of mathematics and its place in the world.
The formation of high-mass stars (> 8 times more massive than our sun) poses an enormous challenge in modern astrophysics. Theoretically, it is difficult to understand whether the final mass of a high-mass star is accreted locally or from afar. Observationally, it is difficult to observe the early cold stages because they have relatively short lifetimes and also occur in very opaque molecular clouds. These early stages, however, can be probed by emission from molecular lines emitting at centimetre, millimetre, and sub-millimetre wavelengths. Our recent work clearly demonstrates that dense molecular clumps embedded in the filamentary "Infrared Dark Clouds" spawn high-mass stars, and these clumps evolve as star-formation activity progresses within them. We have now identified hundreds of clumps in the earliest "pre-stellar" stage. Our MALT90 and RAMPS surveys reveal that these clumps are collapsing, confirming a prediction from "competitive accretion" models. New observations with the ALMA telescope demonstrate that turbulence--and not gravity--dominates the structure of "the Brick", the Milky Way's most massive "pre-stellar" clump.
We discuss ongoing work in convex and non-convex optimization. In the convex setting, we use symbolic computation to study problems which require minimizing a function subject to constraints. In the non-convex setting, we use a variety of computational means to study the behavior of iterated Douglas-Rachford method to solve feasibility problems, finding an element in the intersection of several sets.
For over 25 years, Wolfram Research has been serving Educators and Researchers. In the past 5 years, we have introduced many award winning technology innovations like Wolfram|Alpha Pro, Wolfram SystemModeler, Wolfram Programming Lab, and Natural Language computation. Join Craig Bauling as he guides us through the capabilities of Mathematica. Craig will demonstrate the key features that are directly applicable for use in teaching and research. Topics of this technical talk include
Prior knowledge of Mathematica is not required - new users are encouraged. Current users will benefit from seeing the many improvements and new features of Mathematica 11.
König (1936) asked whether every finite group G is realized as the automorphism group of a graph. Frucht answered the question in the affirmative, and his answer involved graphs whose orders were substantially bigger than the orders of the groups, leading to the question of finding the smallest graph with a fixed automorphism group. We shall discuss some of the early work on this problem and some recent results for the family of dihedral groups.
Many governments and international finance organisations use a carbon price in cost-benefit analyses, emissions trading schemes, quantification of energy subsidies, and modelling the impact of climate change on financial assets. The most commonly used value in this context is the social cost of carbon (SCC). Users of the social cost of carbon include the US, UK, German, and other governments, as well as organisations such as the World Bank, the International Monetary Fund, and Citigroup. Consequently, the social cost of carbon is a key factor driving worldwide investment decisions worth many trillions of dollars.
The social cost of carbon is derived using integrated assessment models that combine simplified models of the climate and the economy. One of three dominant models used in the calculation of the social cost of carbon is the Dynamic Integrated model of Climate and the Economy, or DICE. DICE contains approximately 70 parameters as well as several exogenous driving signals such as population growth and a measure of technological progress. Given the quantity of finance tied up in a figure derived from this simple highly parameterized model, understanding uncertainty in the model and capturing its effects on the social cost of carbon is of paramount importance. Indeed, in late January this year the US National Academies of Sciences, Engineering, and Medicine released a report calling for discussion on the various types of uncertainty in the overall SCC estimation approach and addressing how different models used in SCC estimation capture uncertainty.
This talk, which focuses on the DICE model, essentially consists of two parts. In Part One, I will describe the social cost of carbon and the DICE model at a high-level, and will present some interesting preliminary results relating to uncertainty and the impact of realistic constraints on emissions mitigation efforts. Part one will be accessible to a broad audience and will not require any specific technical background knowledge. In Part Two, I will provide a more detailed description of the DICE model, describe precisely how the social cost of carbon is calculated, and indicate ongoing developments aimed at improving estimates of the social cost of carbon.
This week we shall continue by introducing the cast of characters to be used for producing minimal-order graphs with dihedral automorphism group.
Maintenance plays a crucial role in the management of rail infrastructure systems as it ensures that infrastructure assets (e.g., tracks, signals, and rail crossings) are in a condition that allows safe, reliable, and efficient transport. An important and challenging problem facing planners is the scheduling of maintenance activities which must consider the movement and availability of the maintenance resources (e.g., equipment and crews). The problem can be viewed as an inventory routing problem (IRP) in which vehicles deliver product to customers so as to ensure that the customers have sufficient inventory to meet future demand. In the case of rail maintenance, the customers are the infrastructure assets, the vehicles correspond to the resources used to perform the maintenance, and the product that is in demand, the inventory of which is replenished by the vehicle, is the asset condition. To the best of our knowledge, such a viewpoint of rail maintenance has not been previously considered.
In this thesis we will study the IRP in the rail maintenance scheduling context. There are several important differences between the classical IRP and our version of the problem. Firstly, we need to differentiate between stationary and moving maintenance. Stationary maintenance can be thought of as having demand for product at a specific location, or point, while moving maintenance is more like demand for product distributed along a line between two points. Secondly, when performing maintenance, trains may be subject to speed restrictions, be delayed, or be rerouted, all of which affect the infrastructure assets and their condition differently. Finally, the long-term maintenance schedules that are of interest are developed annually. IRPs with such a long planning horizon are intractable to direct solution approaches and therefore require the development of customised solution methodologies.
I am studying the complexity of solving equations over different algebraic objects, like free groups, virtually free groups, and hyperbolic groups. We have an NSPACE(n log n) algorithm to find solutions in free groups, which I will try to briefly explain. Applications include pattern recognition and machine learning, and first order theories in logic.
Learning to rank is a machine learning technique broadly used in many areas such as document retrieval, collaborative filtering or question answering. We present experimental results which suggest that the performance of the current state-of-the-art learning to rank algorithm LambdaMART, when used for document retrieval for search engines, can be improved if standard regression trees are replaced by oblivious trees. This paper provides a comparison of both variants and our results demonstrate that the use of oblivious trees can improve the performance by more than 2.2%. Additional experimental analysis of the influence of a number of features and of the size of the training set is also provided and confirms the desirability of properties of oblivious decision trees.
About the Speaker: Dr Michal Ferov is a Postdoctoral Research Fellow in the School of Mathematical and Physical Sciences, Faculty of Science and Information Technology.
This afternoon (31 October) we shall complete the discussion about vertex-minimal graphs with dihedral automorphism groups. I have attached an outline of what was covered in the first two weeks.
In this talk I will present a class of C*-algebras known as "generalised Bunce-Deddens algebras" which were constructed by Kribs and Solel in 2007 from directed graphs and sequences of natural numbers. I will present answers to questions asked by Kribs and Solel about the simplicity and the classification of these C*-algebras. These results are from my PhD thesis supervised by Dave Robertson and Aidan Sims.
Today's discrete mathematics seminar is dedicated to Mirka Miller. I am going to present the beautiful Hoffman-Singleton (1960) paper which established the possible valencies for Moore graphs of diameter 2, gave us the Hoffman-Singleton graph of order 50, and gave us one of the intriguing still unsettled problems in combinatorics. The proof is entirely linear algebra and is a proof that any serious student in discrete mathematics should see sometime. This is the general area in which Mirka made many contributions.
Targeted Audience: All early career staff and PhD students; other staff welcome
Abstract: Many of us have been involved in discussions revolving around the problem of choosing suitable thesis topics and projects for post-graduate students, honours students and vacation research students. The panel is going to present some ideas that we hope people in the audience will find useful as they get ready for or continue with their careers.
About the Speakers: Professor Brian Alspach has supervised thirteen PhDs, twenty-five MScs, nine post-doctoral fellows and a dozen undergraduate scholars over his fifty-year career. Professor Eric Beh has 20 years' international experience in the analysis of categorical data with a focus on data visualisation. He has supervised, or is currently supervising, about 10 PhD students. Dr Mike Meylan has twenty years research experience in applied mathematics both leading projects and working with others. He has supervised 5 PhD students and three post-doctoral fellows.
For a two-coloring of the vertex set of a simple graph $G$ consider the following color-change rule: a red vertex is converted to blue if it is the only red neighbor of some blue vertex. A vertex set $S$ is called zero-forcing if, starting with the vertices in $S$ blue and the vertices in the complement $V \setminus S$ red, all the vertices can be converted to blue by repeatedly applying the color-change rule. The minimum cardinality of a zero-forcing set for the graph $G$ is called the zero-forcing number of $G$, denoted by $Z(G)$.
There is a conjecture connecting zero forcing number, minimum degree $d$ and girth $g$ as follows: "If G is a graph with girth $g \geq 3$ and minimum degree $d \geq 2$, then $Z(G) \geq d+ (d-2)(g-3)$".
I shall discuss a recent paper where the conjecture is proved to be true for all graphs with girth $g \leq 10$.
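For concreteness, the colour-change process in the definition above is straightforward to simulate; the sketch below (my own, with a toy example) checks whether a given set is zero-forcing.

```python
# Check whether a set S is zero-forcing for a graph given as an adjacency list.
def is_zero_forcing(adj, S):
    """Return True if, starting from the blue set S, the whole vertex set turns blue."""
    blue = set(S)
    changed = True
    while changed:
        changed = False
        for v in list(blue):
            red_nbrs = [u for u in adj[v] if u not in blue]
            if len(red_nbrs) == 1:                 # v forces its unique red neighbour
                blue.add(red_nbrs[0])
                changed = True
    return len(blue) == len(adj)

# Example: a path on 4 vertices; either endpoint is a zero-forcing set, so Z(P4) = 1.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_zero_forcing(path4, {0}))   # True
print(is_zero_forcing(path4, set())) # False
```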
A challenge with our large-enrolment courses is to manage assessment resources: questions, quizzes, assignments and exams. We want traditional in-class assessment to be easier, quicker and more reliable to produce, in particular where multiple versions of each assessment are required. Our approach is to
We have implemented this within standard software: LaTeX, Ruby, git, and our favourite mathematics software.
We will briefly show off our achievements in 2016, including new features of the software and how we've used them in our teaching. We then invite discussion on what we can do to help our colleagues use these tools.
The Australian Council on Healthcare Standards collates data on measures of performance in a clinical setting in six-month periods. How can these data best be utilised to inform decision-making and systems improvement? What are the perils associated with collecting data in six-month periods, and how may these be addressed? Are there better ways to analyse, report and guide policy?
The Council for Aid to Education is one of many organisations internationally attempting to assess tertiary institutional performance. Value-add modelling is a technique intended to inform system performance. How valid and reliable are these techniques? Can they be improved?
Educational techniques and outreach activities are employed across the education system and the wider community for the purposes of increasing access, equity and understanding.
When new concepts are formed, a well-designed instrument to assess and provide evidence of their performance is required. Does immersion in professional experience activity enable pre-service teachers to achieve teaching standards? Do engagement activities for schools in remote and rural areas increase students’ aspirations and engagement with tertiary institutions?
Forensic anthropologists deal with the collection of bones and profiling individuals based on the remains found. How can statistics inform such decision-making?
Such questions and existing and potential answers will be discussed in the context of research collaborations with Taipei Medical University (Taiwan), Health Services Research Group, Australian Council on Healthcare Standards, Hunter Medical Research Institute, School of Education, Wollotuka Institute, School of Environmental Sciences and a Forensic Anthropologist.
Some Engel words and also commutators of commutators can be expressed as products of powers. I discuss recent work of Colin Ramsay in this area, using PEACE (Proof Extraction After Coset Enumeration), and in particular provide expressions for commutators of commutators as short products of cubes.
I will discuss how to solve free group equations using a practical computer program. Ciobanu, Diekert and Elder recently gave a theoretical algorithm which runs in nondeterministic space $n\log n$, but implementing their method as an actual computer program presents many challenges, which I will describe.
Incremental stability describes the asymptotic behavior between any two trajectories of dynamical systems. Such properties are of interest, for example, in the study of observers or synchronization of chaos. In this paper, we develop the notions of incremental stability and incremental input-to-state stability (ISS) for discrete-time systems. We derive Lyapunov function characterizations for these properties as well as a useful summation-to-summation formulation of the incremental stability property.
A graph labeling is an assignment of integers to the vertices or edges, or both, subject to certain conditions. These conditions are usually expressed on the basis of the weights of some evaluating function. Based on these conditions there are several types of graph labelings, such as graceful, magic, antimagic, sum and irregular labelings. In this research, we look at the H-supermagic labeling of firecracker, banana tree, flower and grid graphs; the exclusive sum labelling of trees; and the edge irregularity strength of grid graphs.
Jonathan Michael Borwein (20 May 1951 - 2 Aug 2016) had many talents, among which were his abilities to make discoveries in mathematics, to seek tenaciously for proofs of these, and to do both of those things in collegial concert with other workers. In this colloquium I shall give three examples of situations in which I had the pleasure of seeing those talents in action. They concern multiple zeta values, walks on lattices, and modular forms. In each case I shall give a notable identity, comment on its proof, and indicate further work that was provoked by the discovery. The identities in question are chosen to be comprehensible to anyone with an undergraduate education in mathematics and also to people, like myself, who lack that particular qualification.
Mathscraft is a workshop for junior high school students that aims to give them the experience of doing maths the way research mathematicians do. It is coordinated and sponsored by the ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS), and sessions are conducted by Anthony Harradine, (Prince Alfred College, Adelaide).
In a Mathscraft session there are up to 10 groups, each comprising three students (years 7-10), one teacher and one mathematician. The teams are given mathematical problems and are guided through a problem-solving process. The problems and the process are designed to mimic the mathematics that is done by research mathematicians - exploring, noticing patterns, making conjectures, proving them, figuring out why, and thinking of ways to extend the problem.
In this talk I'll describe the design of the problems and process (with examples), and explain the motivations behind them. I'll also talk about a Professional Development workshop that we ran for teachers in November last year, which had the aim of training them to run Mathscraft sessions in their own local areas. This workshop was sponsored by ACEMS and MATRIX.
In this talk, we discuss our new approach to design reverse logistics models for dairy industries, in particular whey products. Whey is a by-product of cheese making with many applications spanning from dairy and meat to pharmaceuticals. We develop a hierarchical location-routing model for a whey recovery network design. In this class of models, the location and routing decisions are made simultaneously. As the problem is NP-hard, it may not be possible to solve even the small-size instances efficiently. We suggest different approaches such as adding valid inequalities and improving lower and upper bounds to solve the problem in a reasonable amount of time.
In 1975, culminating more than 40 years of published work on the problem by Paul Erdős, he and John Selfridge proved that the product of consecutive integers cannot be a nonzero perfect power. Their proof was a remarkable combination of elementary and graph-theoretic arguments. Subsequently, Erdős conjectured that this result can be generalized to a product of consecutive terms in an arithmetic progression, under certain basic assumptions. In this talk, we discuss joint work with Samir Siksek in the direction of proving Erdős' conjecture. Our approach is via techniques based upon the modularity of Galois representations, bounds for the number of supersingular primes for elliptic curves, and analytic estimates for Dirichlet character sums.
The talk will include general information on the current state of plasma fusion as an energy source and some more detailed aspects of this research area.
Gamification refers to the use of elements of games in non-game contexts and has been applied in workplaces, marketing, health programs and other areas, with mounting evidence of increased interest, involvement, satisfaction and performance of the participants. More recently gamification has been emerging as a teaching method with great potential to improve students' motivation and engagement. Gamification in education should not be confused with playing educational games, as it only uses concepts such as points, leaderboards, etc., rather than computer games themselves. In this talk we describe the gamification of a theoretical computer science course that we carried out in 2014, 2015 and 2016, as well as our experience with two other STEM courses.
There is to date no overarching classification theorem for C*-algebras, which means the theory of C*-algebras is an example-driven field of mathematics. Perhaps the most important class of examples are group C*-algebras, which are as old as the field itself. An analogous construction of C*-algebras associated to semigroups has been an active area of research among operator algebraists since Coburn's Theorem regarding the universality of the C*-algebra generated by a single isometry appeared in the 1960s. In July this year, Newcastle will host the AMSI/AustMS sponsored event "Interactions between operator algebras and semigroups". In this talk I will give a gentle introduction to the theory of semigroup C*-algebras and perhaps it will convince some of you to come along and take part in the meeting.
In this talk, I will describe the conditional value at risk (CVaR) measure used in modelling risk aversion in decision making problems.
CVaR is a coherent risk measure, which makes it well suited to modelling risk aversion.
I will then present two applications of CVaR. The first application considers all problems that are representable by decision trees. In this application, I show that these problems under the CVaR criterion can be solved efficiently by solving a linear program. In the second application, I consider a basic problem in the area of production planning with random yield. For this problem, I present a risk aversion model. The model is nonconvex. I present an efficient locally optimal solution method and then provide a sufficient optimality condition.
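For context, the representation of CVaR that usually underlies such linear programming reformulations is the Rockafellar-Uryasev formula (that the talk uses exactly this form is an assumption on my part): for a random loss $Z$ and level $\alpha \in (0,1)$,

$$ \mathrm{CVaR}_\alpha(Z) = \min_{t \in \mathbb{R}} \left\{ t + \frac{1}{1-\alpha}\, \mathbb{E}\big[(Z - t)_+\big] \right\}. $$

When $Z$ takes finitely many values, the positive part inside the expectation can be linearised with auxiliary variables, which is what turns CVaR-minimising problems over decision trees into linear programs.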
In this talk, we discuss a new approach to demand forecasting in supply chains. Demand forecasting is an inevitable task in supply chain management. Due to the endogenous and exogenous factors that impact a supply chain, the regime of the supply chain may vary significantly. Such changes in regime can bring high volatility to demand time series and, consequently, a single statistical model may not suffice to forecast the demand with a desirable level of precision. We develop a nexus between stochastic processes and statistical models to forecast demand in supply chains with regime switching. Preliminary results on real-world time series data sets are promising.
This talk gives an outline of (mostly unfinished) work done collaboratively while on sabbatical in semester 2 last year. Join me as we travel through the USA, Germany, Belgium and Austria. Your guide will share off-the-beaten-track highlights such as quaternionic splines, prolate shift systems, higher-dimensional Hardy, Paley-Wiener and Bernstein spaces, the Clifford Fourier transform, multidimensional prolates, and a Jon Borwein-inspired optimization-based approach to the construction of multidimensional wavelets. Breakfast not included.
In recent joint work on equilibrium states on semigroup C*-algebras with Afsar, Brownlowe, and Larsen, we discovered that the structure of equilibrium states admits an elegant description in terms of substructures of the original semigroup. More precisely, we consider two almost contrary subsemigroups and related features to obtain a unifying picture for a number of earlier case studies. Somewhat surprisingly, all the examples from the case studies satisfy a list of four abstract properties (and are then called admissible). The nature and presence of these properties is yet to be fully understood. In this talk, I will focus on a class of examples arising as Zappa-Szép products of right LCM semigroups which showcases some interesting features. No prerequisites in operator algebras are required to follow this talk.
Totally disconnected, locally compact (t.d.l.c.) groups are a large class of topological groups that arise from a few different sources, for instance as automorphism groups of a range of algebraic and combinatorial structures, or from the study of isomorphisms between finite index subgroups of a given group. A general theory has begun to emerge in recent years, based on the interaction between small-scale and large-scale structure in t.d.l.c. groups. I will give a survey of some ways in which these groups arise and some of the tools that have been developed for understanding them.
In a way, mathematics can be seen as a language game, where we use symbols, together with some rewriting rules, to represent objects we are interested in and then ask what can be said about the sequences of symbols (languages) that capture certain phenomena. For example, given a group G with generators a and b, can we recognise (using a computer) the sequences of generators that correspond to non-trivial elements of G? If yes, how powerful a computer do we need, i.e. how complicated is the language we are studying?
There is a natural duality between various types of computational models and the classes of languages that can be recognised by them. Until recently most problems/languages in group theory were classified within the Chomsky hierarchy, but there are more computational models to consider. In the talk I will briefly introduce L-systems, a family of classes of languages originally developed to model the growth of algae, and show that the co-word problem in Grigorchuk's group, a group of particularly nice transformations of the infinite binary tree, can be seen as a language corresponding to a fairly simple L-system.
Problem solving, communication and information literacy are just a few graduate attributes that employers value, yet it commonly appears that students upon graduating show only limited improvement in these areas. For instance, 3rd year students can still be thrown by relatively simple unfamiliar problems, even after working actively on numerous related exercises and problems throughout their degree. I will discuss some of the things I have implemented in my teaching to specifically target the development of student graduate attributes. My experience is with teaching mathematics, physics and engineering students, however much of my discussion will be non-discipline-specific.
I am going to look at three unsolved graph theory problems for which the same family of graphs presents a barrier to either solving or making substantial progress on the problems. The graphs in this family are called honeycomb toroidal graphs. The three problems are not closely related.
In this talk we consider a class of monotone operators which are appropriate for symbolic representation and manipulation within a computer algebra system. Various structural properties of the class (e.g., closure under taking inverses, resolvents) are investigated as well as the role played by maximal monotonicity within the class. In particular, we show that there is a natural correspondence between our class of monotone operators and the subdifferentials of convex functions belonging to a class of convex functions deemed suitable for symbolic computation of Fenchel conjugates which were previously studied by Bauschke & von Mohrenschildt and by Borwein & Hamilton. A number of illustrative computational examples utilising the introduced class of operators will be provided including computation of proximity operators, recovery of a convex penalty function associated with the hard thresholding operator, and computation of superexpectations, superdistributions and superquantiles with specialization to risk measures.
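As a reminder of the objects involved, the proximity operator of a proper lower semicontinuous convex function $f$ is precisely the resolvent of its subdifferential, which is the kind of correspondence between monotone operators and convex functions exploited above:

$$ \mathrm{prox}_{\gamma f}(x) = \operatorname{argmin}_{y} \left\{ f(y) + \tfrac{1}{2\gamma} \|y - x\|^2 \right\} = (\mathrm{Id} + \gamma\, \partial f)^{-1}(x), \qquad \gamma > 0. $$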
In this talk we discuss a new approach to the Hamilton cycle problem (HCP). The HCP is one of the classical problems in combinatorial mathematics. It can be stated as follows: given a graph G, find a cycle that passes through every vertex exactly once, or determine that no such cycle exists. In 1994, Filar and Krass developed a new model for the HCP by embedding the problem into a Markov decision process. This approach motivated a new line of research which has since been extended by several other authors. In this approach, a polytope corresponding to a given graph G is constructed, and searching for Hamiltonian cycles in a Hamiltonian graph G is converted to searching for particular extreme points (called Hamiltonian extreme points) among the extreme points of that polytope. In this research, we design a Markov chain with certain properties to sample Hamiltonian extreme points of that polytope. More precisely, we would like to study a specific class of input graphs, the so-called random graphs. Some preliminary theoretical results are presented in this talk.
Part of my 2016 SSP included completion of a semi-historical review of the mathematics of W.N. Bailey, a familiar name in some combinatorics circles in relation to the "Bailey lemma" and "Bailey pairs". My personal encounters with this mathematician from the first half of the 20th century were somewhat different and more related to applications of special functions to number theory, a subject Bailey himself never dealt with. One motivation for my writing was the place where I spent my SSP; details to be revealed in the talk. There will be some formulas displayed, sometimes scary, but they will serve as a background to historical achievements. A broad audience is welcome.
Control Lyapunov functions (CLFs) for the control of dynamical systems have faded from the spotlight in recent years even though their full potential has not yet been explored. To reactivate research on CLFs, we review existing results on Lyapunov functions and (nonsmooth) CLFs in the context of stability and stabilization of nonlinear dynamical systems. Moreover, we highlight open problems and results on CLFs for destabilization. The talk concludes with ideas on Complete CLFs, which combine the concepts of stability and instability. The results presented in the talk are illustrated and motivated by the examples of a nonholonomic integrator and Artstein's circles.
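As a reminder, in the smooth setting a control Lyapunov function for $\dot{x} = f(x,u)$ is a positive definite, radially unbounded function $V$ satisfying

$$ \inf_{u} \ \langle \nabla V(x), f(x,u) \rangle < 0 \qquad \text{for all } x \neq 0; $$

nonsmooth CLFs, which the talk also covers, replace the gradient by suitable generalized derivatives.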
We determine the Borel complexity of the topological isomorphism problem for profinite, t.d.l.c., and Roelcke precompact non-Archimedean groups, by showing it is equivalent to graph isomorphism.
For oligomorphic groups we merely establish this as an upper bound.
Joint work with Kechris and Tent.
The research interest in pattern-avoiding permutations is inspired by Donald Knuth's work on stack-sorting. According to Knuth, a permutation can be sorted by passing through a single infinite stack if and only if it avoids the sub-permutation pattern 231. Murphy extended Knuth's work to two infinite stacks in series and found that the basis of the generated permutations is infinite, but Elder proved that when one of the stacks is limited to depth two, the basis is finite and the permutations are algebraic. My research investigates the permutations generated by a stack of depth 3 and an infinite stack in series, with the aim of determining the basis and the nature of the permutations in terms of formal languages.
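As a concrete illustration of Knuth's single-stack criterion mentioned above, the following short Python sketch (my own illustrative code with an invented function name, not part of the research described) runs the greedy one-pass stack sort; by Knuth's theorem the input comes out sorted exactly when it avoids the pattern 231.

def stack_sortable(perm):
    """Greedy one-pass stack sort of a sequence of distinct integers.
    Returns True iff the output is increasing, which by Knuth's theorem
    happens exactly when perm avoids the pattern 231."""
    stack, output = [], []
    for x in perm:
        # move everything smaller than the next input to the output
        while stack and stack[-1] < x:
            output.append(stack.pop())
        stack.append(x)
    while stack:
        output.append(stack.pop())
    return all(a < b for a, b in zip(output, output[1:]))

print(stack_sortable([3, 1, 2]))   # True: 312 avoids 231
print(stack_sortable([2, 3, 1]))   # False: 231 itself cannot be sorted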
Lyapunov's second or direct method provides an easy-to-check sufficient condition for stability properties of equilibria. The converse question - given a stability property, does there exist an appropriate Lyapunov function? - has been fundamental in differentiating and classifying different stability properties, particularly with regards to "uniform" stability.
In this talk, I will review the usual textbook definitions for Lyapunov functions for time-varying systems and describe where they are deficient. Some interesting new sufficient (and probably necessary) conditions pop up along the way.
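For orientation, the "usual textbook definition" alluded to here asks, for $\dot{x} = f(t,x)$, for a continuously differentiable $V$ and class-$\mathcal{K}$ functions $\alpha_1, \alpha_2, \alpha_3$ such that

$$ \alpha_1(|x|) \le V(t,x) \le \alpha_2(|x|), \qquad \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(t,x) \le -\alpha_3(|x|), $$

which together guarantee uniform asymptotic stability of the origin; the talk discusses where conditions of this type fall short.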
The theory of minimal surfaces (a.k.a. soap films) goes back to Euler’s discovery in 1741 that the catenoid is area-minimising. It is still a remarkably vibrant area of research. I will describe recent joint work with Franc Forstneric of the University of Ljubljana, Slovenia. We assemble all minimal surfaces with a given shape into a space. It is an infinite-dimensional space. What does it look like? We have been able to determine its "rough shape". I will explain what we mean by "rough shape" and describe the ingredients from complex analysis, differential topology, and homotopy theory that go into our result.
This presentation will outline my research into fitness for purpose of tertiary algebra textbooks used in Iraq in the teaching of undergraduate algebra courses with regard to the training of pre-service teachers. The project draws on work done in textbook analysis, and work done into the teaching and learning of abstract algebra and the nature of proof.
It is well recognised that for many students learning abstract algebra and the nature of proof is difficult (Selden, 2010). Courses in abstract algebra are central to many tertiary pre-service mathematics teacher programs, including in Iraq. Capaldi (2012) suggests that abstract algebra textbooks can lay the foundation for a course and greatly influence student understanding of the material. However, it has been found that there can be large differences in the textbooks used, at the school level at least, in different cultures (Alajmi, 2012; Fan and Zhu, 2007; Pepin and Haggarty, 2001). For instance, Mayer and Sims (1995) found that Japanese mathematics texts feature many more worked-out examples than texts used in the United States.
I will be examining the textbooks in light of theories by Harel and Sowder, and by Stacey and Vincent, regarding types of proof and modes of reasoning (Stacey and Vincent, 2009), and by Capaldi (2012) regarding readers' relationships with textbooks.
The textbooks will also be examined to try to infer the underlying assumptions about pedagogies and knowledge made by the author(s). Baxter-Magolda's theory, linking forms of assessment to underlying theories of knowledge (Baxter-Magolda, 1992), will be helpful in this pursuit.
The aim of this workshop is to bring together the world's foremost experts on the theory of semigroups and their relationships to other fields of mathematics such as operator algebras and totally disconnected locally compact groups. This workshop will allow the international leaders in the field to come to Australia to teach young Australian ECRs, and to forge new collaborations with Australian mathematicians.
Details are available on the conference website.
Operator algebras associated to semigroups can be traced back to a famous theorem of Coburn from the 1960s. The theory has recently been reinvigorated through Xin Li's construction of semigroup C*-algebras. Li's construction has introduced new and interesting classes of C*-algebras, which have deep connections to number theory and dynamical systems. One connection that will be thoroughly explored through this meeting is that to the representation theory of totally disconnected locally compact groups.
I will discuss how to relate regular origami tilings to vertex models in statistical mechanics. The Miura-ori origami pattern has found many uses in engineering as an auxetic metamaterial. I analyze the effect of crease assignment defects on the long-range order properties of the Miura-ori and 4 other foldable lattices. These defects are known to affect the material's compressibility properties, so my exact results help to understand how easy it is to tune an origami metamaterial to have desired compressibility properties by introducing a set density of defects. I have found that certain origami patterns are more easily tunable than others, and conversely, the long-range ordering of some are more stable with respect to defect formation. I have also found analytical expressions for the locations of phase transition points with respect to crease assignment ordering as well as layer ordering.
Colour images are represented by functions of two variables that output three variables, and analysing them requires tools that can handle these dimensions. One method is to use Clifford Algebras and their recently discovered Fourier Transform. We prove that the Clifford Fourier Transform has a Hardy space, and that its Paley-Wiener and Bernstein spaces are identical. Another method is to find two-dimensional wavelets that are non-separable. We achieve this through the use of the Douglas-Rachford Projection Algorithm, and hope to achieve it through the use of Proximal Alternating Linear Methods. This talk briefly overviews these methods and the path to completion.
In this talk I will briefly introduce the mixed finite element method and show some of its applications. I consider Poisson, elasticity, Stokes and biharmonic equations as applications of the mixed finite element method. The mixed finite element method also arises naturally in Stokes flow and multi-physics problems, as well as when we consider non-conforming discretisation techniques. I will also present my recent work on the mixed finite element method for biharmonic and Reissner-Mindlin plate equations.
I will present a brief survey of some recent results that deal with the characterization of hyperbolic dynamics in terms of the existence of appropriate Lyapunov functions. The main novelty of these results lies in the fact that they consider noninvertible and infinite-dimensional dynamics. This is a joint work with L. Barreira, C. Preda and C. Valls.
Given a sequence of integers, one would like to understand the pattern which generates the sequence, as well as its asymptotics. If the sequence is viewed as the coefficients of the series expansion of a function, called its generating function, many questions regarding the sequence can be answered more easily. If the generating function satisfies a linear ODE or a nonlinear algebraic DE, the differential equation can be found if enough terms in the sequence are given. In this talk I'll discuss my implementation in C of such a search, applications, and a systematic search of the entire Online Encyclopedia of Integer Sequences (OEIS) for generating functions.
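To make the idea concrete, here is a toy Python sketch (illustrative only, with a function name of my own choosing; it is far simpler than the C implementation discussed in the talk, which searches for linear ODEs with polynomial coefficients and algebraic differential equations). It looks only for a constant-coefficient linear recurrence satisfied by the data, the simplest case in which a generating function can be pinned down from finitely many terms.

import numpy as np

def find_constant_recurrence(seq, max_order=4):
    """Look for constants c_1, ..., c_r with
        a(n) = c_1*a(n-1) + ... + c_r*a(n-r)   for every available n,
    returning (r, [c_1, ..., c_r]) for the smallest order that fits, else None.
    This handles only the constant-coefficient (C-finite) case, a toy version
    of the search for linear ODEs / algebraic DEs described in the talk."""
    a = np.array(seq, dtype=float)
    for r in range(1, max_order + 1):
        if len(a) < 2 * r + 1:                    # require some over-determination
            break
        A = np.array([[a[n - 1 - j] for j in range(r)] for n in range(r, len(a))])
        b = a[r:]
        c, *_ = np.linalg.lstsq(A, b, rcond=None)
        if np.allclose(A @ c, b):                 # the recurrence genuinely holds
            return r, [round(float(x), 6) for x in c]
    return None

print(find_constant_recurrence([0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]))  # (2, [1.0, 1.0])

For Fibonacci-type input the sketch recovers the order-two recurrence; failure to find a recurrence at low order is of course not evidence that the sequence satisfies no differential equation.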
Schoenberg’s polynomial cardinal B-splines of order $n$ provide a family of compactly supported $C^{n-2}$-functions. We present several generalizations of these B-splines, discuss their properties, and relate them to fractional difference and differentiation operators. Potential applications are mentioned.
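For reference, the cardinal B-splines referred to here are generated by repeated convolution (this is the standard construction; the generalizations in the talk modify it):

$$ B_1 = \chi_{[0,1)}, \qquad B_n = B_{n-1} * B_1 \quad (n \ge 2), $$

so that $B_n$ is supported on $[0,n]$, is a piecewise polynomial of degree $n-1$, and belongs to $C^{n-2}$.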
We consider variations on the commutative diagram consisting of the Fourier transform, the Sampling Theorem and the Paley-Wiener Theorem. We start from a generalization of the Paley-Wiener theorem and consider entire functions with specific growth properties along half-lines. Our main result shows that the growth exponents are directly related to the shape of the corresponding indicator diagram, e.g., its side lengths. Since many results from sampling theory are derived with help from a more general function-theoretic point of view (the most prominent example being the Paley-Wiener Theorem itself), we argue that a closer examination and understanding of the Bernstein spaces and the corresponding commutative diagrams can, via a limiting process to the straight line interval $[-A,A]$, yield new insights into $L^p(\mathbb{R})$-sampling theory. This is joint work with Gunter Semmler, Technische Universität Bergakademie Freiberg, Germany.
One of the most contentious areas in Indigenising Curriculum is the Maths and Sciences. This presentation considers how Maths and Statistics can provide a solid and meaningful response to the Indigenising imperative that will fulfil the two criteria of socially just education:
Suggestions on both content areas and student recruitment, retention and success will be discussed. Examples will be based on the presenters' experiences as cultural facilitators in education from Foundations to the tertiary sector.
Associate Professor Kathy Butler and Ms Tammy Small are employed in the Office of the Pro Vice-Chancellor Indigenous Education and Research at the University of Newcastle. With Professor Steve Larkin, Tammy and Kathy are currently examining ways for the University to provide cultural competency training as a whole-of-university initiative.
This presentation is intended to assist academics in considering how to adapt programs, course content and delivery to incorporate, be mindful of, and better appeal to people with Indigenous backgrounds and interests.
We present $h$ and $p$-versions of the time domain boundary element method for boundary and screen problems for the wave equation in $\mathbb{R}^3$. First, graded meshes are shown to recover optimal approximation rates for solutions in the presence of edge and corner singularities on screens. Then an a posteriori error estimate is presented for general discretizations, which gives rise to adaptive mesh refinement procedures. We also discuss preliminary results for $p$ and $hp$-versions of the time domain boundary element method. Numerical experiments illustrate the theory. Joint with H. Gimperlein and D. Stark, Heriot-Watt University, Edinburgh.
The Chebyshev conjecture is a 59-year-old open problem in the fields of analysis, optimisation, and approximation theory, positing that Chebyshev subsets of a Hilbert space must be convex. Inspired by the work of Asplund, Ficken and Klee, we investigate an equivalent formulation of this conjecture involving Chebyshev subsets of the unit sphere. We show that such sets have superior structure and use the Radon-Nikodym Property to extract some local structural results about such sets.
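To fix terminology: a subset $C$ of a Hilbert space $H$ is called Chebyshev if every point of $H$ has a unique nearest point in $C$, that is,

$$ \text{for each } x \in H \text{ there is exactly one } c \in C \text{ with } \|x - c\| = \operatorname{dist}(x, C). $$

Every nonempty closed convex set in a Hilbert space is Chebyshev; the conjecture asks whether, conversely, every Chebyshev set must be convex.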
This presentation will discuss the megatrends, both technological and societal, that are impacting the modern supply chain. In particular, the balance between people and machines will be explored in the context of future of work within supply chains. What are the appropriate roles for robotics within the supply chain of the future? What is the future for people in the supply chain? Examples of existing and emerging technologies will be presented to show that the future supply chain is close at hand.
Groups of rooted tree automorphisms, and (weakly) branch groups in particular, have received considerable attention in the last few decades, due to the examples with unexpected properties that they provide, and their connections to dynamics and automata theory. These groups also showcase interesting phenomena in profinite group theory. I will discuss some of these and other profinite completions that one can use to study these groups, and how to find them. All these concepts will be defined in the talk.
The human brain is still one of the most powerful and at the same time most energy-efficient computers. Artificial neural networks (ANNs) are inspired by their biological counterparts and the workings of biological nervous systems. ANNs were among the most popular machine learning algorithms in the 1980s-90s. However, after 2000 other algorithms came to be regarded as more accurate and practical. In 2012 ANNs came back with a big bang: a new form of biologically-inspired ANNs, deep convolutional neural networks, showed surprisingly good performance on image classification and object detection tasks, far superior to all other methods available. Since then deep networks have broken records in many application domains, from object detection for autonomous vehicles to playing the game of Go and skin health diagnostics. Deep networks are currently revolutionising machine learning in academia and industry. They can be regarded as the most disruptive technology in any industry that involves machine learning, artificial intelligence, pattern recognition, data mining or control. This seminar aims at providing an overview of ANNs - old and new - with a special view towards how visualisations could help to explain how they work.
About the speaker: Stephan Chalup (Ph.D., Dipl.-Math.) is an associate professor at the University of Newcastle in Australia, where he is leading the Interdisciplinary Machine Learning Research Group and the Newcastle Robotics Lab. He studied mathematics with neuroscience at the University of Heidelberg and completed his Ph.D. in Computing Science at the Machine Learning Research Centre at Queensland University of Technology (QUT) in 2002. Stephan has published 100 research articles and is on the editorial boards of several journals. He is a member of the University of Newcastle's Priority Research Centre CARMA.
This talk expands on the 1993 paper by Hohn and Skoruppa, together with a brief exploration of conditions for optimal Mahler measure.
About the speaker: Elijah Moore is a summer research student under the supervision of Wadim Zudilin.
The rapid increase in available information has led to many attempts to automatically locate patterns in large, abstract, multi-attributed information spaces. These techniques are often called data mining and have met with varying degrees of success. An alternative approach to automatic pattern detection is to keep the user in the exploration loop by developing displays that enhance their natural sensory abilities to detect patterns. This approach, whether visual, auditory, or touch based, can assist a domain expert to search their data for useful relationships. However, designing models of the abstract data and defining appropriate sensory mappings are critical tasks in building such a system. Intuitive multi-sensory displays (visual, auditory, touch) of abstract data are difficult to design and the process needs to carefully consider human perceptual and cognitive abilities. This talk will introduce a taxonomy that helps designers consider the range of sensory mappings, along with appropriate guidelines, when building such multisensory displays. To illustrate this process a case study in the domain of stock market data is also presented.
About the speaker: Keith completed his Bachelor's degree in Mathematics at Newcastle University in 1988 and his Masters in Computing in 1993. Between 1989 and 1999, Keith worked on applied computer research for BHP Research. His PhD examined the design of multi-sensory displays for stock market data and was completed at Sydney University in 2003. His work has received international recognition, being selected among the best visualisations and consequently exhibited at a number of international locations and reviewed in the prestigious journal Science. In 2007 he completed a post-doctoral year in Boston working at the New England Complex Systems Institute visualising health related data. He has expertise in the fields of Human Interface Design, Computer Games, Virtual Reality, Immersive Analytics, and the theory of Perception and Cognition related to the design of multi-sensory user interfaces. Keith currently works in the School of Electrical Engineering and Computing at the University of Newcastle, Australia, where he teaches Computer Games and Programming. While his background is in Computer Science, he has also exhibited his paintings in 11 exhibitions and provided lyrics for 5 CDs and a musical. You can find more about his art and science at www.knesbitt.com.
Multi-objective optimization provides decision-makers with a complete view of the trade-offs between their objective functions that are attainable by feasible solutions. Since many problems can be formulated as integer programs, the development of efficient and reliable multi-objective integer programming solvers may have significant benefits for problem solving in industry and government. However, the conjunction of multiple objectives and integrality yields problems that can be challenging to solve. So, this talk provides an overview of a few new exact as well as heuristic algorithms for this class of optimization problems. In particular, the talk focuses on computing the nondominated frontier and also the problem of optimization over the frontier. It is worth mentioning that all of the algorithms and their corresponding open-source software packages are developed in Multi-Objective Optimization Laboratory at the University of South Florida.
The finite element method has become the most powerful approach in solving partial differential equations arising in modern engineering and physical applications. We present computation and visualisation of the solutions of some applied partial differential equations using the finite element method for most of our examples. Our examples come from solid and fluid mechanics, image processing and heat conduction in sliding meshes.
About the speaker: Dr Lamichhane was awarded the MSc in Industrial Mathematics from the University of Kaiserslautern in 2001, and the PhD in Mathematics from the University of Stuttgart in 2006. He took up a postdoctoral fellowship at the Australian National University in 2008 and is now a senior lecturer at the University of Newcastle. Dr Lamichhane's main interests are numerical analysis, differential equations and applied mathematics, and his recent research focus is on the approximation of solutions of partial differential equations using the finite element method.
The late Professor Jonathan Borwein was fascinated by the constant $\pi$. Some of his talks on this topic can be found on the CARMA website. This homage to Jon is based on my talk at the Jonathan Borwein Commemorative Conference. I will describe some algorithms for the high-precision computation of $\pi$ and the elementary functions, with particular reference to the book Pi and the AGM by Jon and his brother Peter Borwein. Here "AGM" is the arithmetic-geometric mean of Gauss and Legendre. Because the AGM has second-order convergence, it can be combined with FFT-based fast multiplication algorithms to give fast algorithms for the $n$-bit computation of $\pi$. I will survey a few of the results and algorithms that were of interest to Jon. In several cases they were either discovered or improved by him. If time permits, I will also mention some new results that would have been of interest to Jon.
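For readers who have not seen it, the Gauss-Legendre (Brent-Salamin) iteration is one representative example of the AGM-based algorithms referred to above: starting from

$$ a_0 = 1, \quad b_0 = 1/\sqrt{2}, \quad t_0 = 1/4, \quad p_0 = 1, $$

one iterates

$$ a_{k+1} = \frac{a_k + b_k}{2}, \quad b_{k+1} = \sqrt{a_k b_k}, \quad t_{k+1} = t_k - p_k (a_k - a_{k+1})^2, \quad p_{k+1} = 2 p_k, $$

and takes $\pi \approx (a_{k+1} + b_{k+1})^2 / (4 t_{k+1})$; the number of correct digits roughly doubles with every iteration, which is why pairing the AGM with FFT-based multiplication gives fast $n$-bit algorithms.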
The lattice Boltzmann method is used to carry out direct numerical simulations of laminar and turbulent flows in smooth- and rough-walled channels or pipes at critical and subcritical Reynolds numbers. The velocity field is solved using the Lattice Boltzmann Method (LBM) as an alternative numerical approach in computational fluid dynamics. The method has also been used successfully to simulate more complex fluid dynamics such as thermal transport, jet flows, electrokinetic flows and so on. The basic idea of the LBM is to construct a simplified kinetic model that incorporates the essential physics of microscopic processes so that the macroscopic averaged properties obey the desired Navier-Stokes equations. The computation and visualization will be discussed in this seminar.
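For concreteness, the simplest and most common variant of the lattice Boltzmann update is the single-relaxation-time (BGK) form (the simulations described here may of course use a more elaborate collision model):

$$ f_i(\mathbf{x} + \mathbf{c}_i \Delta t,\ t + \Delta t) - f_i(\mathbf{x}, t) = -\frac{\Delta t}{\tau} \Big( f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \Big), $$

where the $f_i$ are particle distribution functions along the discrete lattice velocities $\mathbf{c}_i$, $\tau$ is the relaxation time (which sets the viscosity), and $f_i^{\mathrm{eq}}$ is a local equilibrium distribution; the macroscopic density and velocity are recovered as moments of the $f_i$.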
About the speaker: Dr Nisat Nowroz Anika completed her Bachelor's and MSc in Applied Mathematics at Khulna University in Bangladesh in 2011 and 2013 respectively. She is currently undertaking a PhD in Mechanical Engineering at the University of Newcastle under the supervision of Professor Lyazid Djenidi. The major focus of her research is mixing at low Reynolds number by generating turbulence.
Constructive methods for controller design for dynamical systems subject to bounded state constraints have only been investigated by a limited number of researchers. The construction of robust control laws is significantly more difficult compared to unconstrained problems due to the necessity of discontinuous feedback laws. A rigorous understanding of the problem is however important in obstacle or collision avoidance for mobile robots, for example. In this talk we present preliminary results on controller design for obstacle avoidance of linear systems based on the notion of hybrid systems. In particular, we derive a discontinuous feedback law that globally stabilizes the origin while avoiding a neighborhood around an obstacle. In this context, an explicit bound on the maximal size of the obstacle is additionally provided.
In teaching mathematics, we are interested in improving students' understanding of core concepts. Students enter our classrooms as relative novices in their understanding of mathematics and one of our goals is to help them build expert understanding of mathematics. This presents us with two related problems: (1) creating effective teaching strategies designed to evolve novice thinking to expert thinking, and (2) designing and validating measures capable of assessing whether different teaching interventions improve students' conceptual understanding of mathematics. Many usual approaches to these problems make use of scoring rubrics for student work. I will discuss an experiment that highlights some of the difficulties of using scoring rubrics for this work, and then I will present an alternative approach to these problems that makes use of the law of comparative judgment, which is based on the principle that humans are better at comparing two things against one another than they are at comparing one thing against a set of criteria (Thurstone's Law of Comparative Judgment, 1927). As part of this presentation, I will demonstrate ComPAIR, a new online tool for supporting student learning with peer feedback. ComPAIR was co-developed with a group of colleagues from the Faculty of Science, the Faculty of Arts, and the Centre for Teaching and Learning Technology at the University of British Columbia.
Complex virtual environments are used for entertainment in the form of games and are also fundamental in training and simulation environments. Apart from the visual representation of reality, these environments, and the interactions occurring between users within them, are a source of a wide variety of data. These data cover interactions such as spatio-temporal positional tracking within 3D virtual environments, to the measurement of physiological responses of users to in game events. Of particular interest are measures of visual complexity, and how these measures might be useful in determining minimum realism for affective virtual environments. This talk will consider these different data types and sources and highlight some active research areas in the analysis and visualisation of this data.
About the speaker: Dr Karen Blackmore is a Senior Lecturer in Computing at the School of Electrical Engineering and Computing, The University of Newcastle, Australia. She received her BIT (Spatial Science) With Distinction and PhD (2008) from Charles Sturt University, Australia. Dr Blackmore is a spatial scientist with research expertise in the modelling and simulation of complex social and environmental systems. Her research interests cover the use of agent-based models for simulation of socio-spatial interactions, and the use of simulation and games for serious purposes. Her research is cross-disciplinary and empirical in nature, and extends to exploration of the ways that humans engage and interact with models and simulations. Before joining the University of Newcastle, Dr Blackmore was a Research Fellow in the Department of Environment and Geography at Macquarie University, Australia and a Lecturer in the School of Information Technology, Computing and Mathematics at Charles Sturt University.
An enduring topic of research interest relates to the heritability of mental traits, such as intelligence. Some of the work on this topic has focussed on genetic contributions to the speed of cognitive processing, by examination of response times in psychometric tests. An important limitation of previous work is the underlying assumption that variability in response times solely reflects variability in the speed of cognitive processing. This assumption has been problematic in other domains, due to the confounding effects of caution and motor execution speed on observed response times. We extend a cognitive model of decision-making to account for the relatedness structure in a twin study paradigm. This approach has the potential to separately quantify different contributions to the heritability of response time: contributions from cognitive processing speed, caution, and motor execution speed. In some ways, this is a typical usage of an evidence accumulation model, and it throws up all the typical problems that we struggle with in data visualisation. Those problems will become evident during the talk, as we discuss data from the Human Connectome Project. We find that caution is both highly heritable and highly influenced by the environment, while cognitive processing speed is moderately heritable with little environmental influence, and motor execution speed appears to have no strong influence from either. Our study suggests that the assumption made in previous studies of the heritability being within mental processing speed is incorrect, with response caution actually being the most heritable part of the decision process.
Experimental discovery has long played an important role in research mathematics, even before the advent of modern computational tools. Many methods of antiquity are familiar to all of us, including the drawing of pictures to gain geometric insights and exhaustively solving similar problems in order to identify patterns. I will share a variety of modern computational tools and techniques which I have used for my research at CARMA. The contexts of the discoveries will be varied -- including number theory, non-Euclidean geometry, complex analysis, and optimization -- and so the emphasis will be on the strategies employed rather than specific outcomes.
Bio: Scott Lindstrom received his master's degree from Portland State University. In September 2015 he came to CARMA at the University of Newcastle as a PhD student of Jonathan Borwein. Following Professor Borwein's untimely passing, he has continued as a student of Brailey Sims, Heinz Bauschke, and Bishnu Lamichhane. In October he will begin a postdoctoral fellowship at Hong Kong Polytechnic University. His principal research area is experimental mathematics with particular emphasis in optimization and nonlinear convex analysis. He is a member of the AustMS special interest group Mathematics of Computation and Optimization (MoCaO) and organizes the Borwein Meetings for RHD students and postdocs at CARMA.
The Discrete Element Method (DEM) is a very powerful numerical method for the simulation of unbonded and bonded granular materials, such as soil and rock. One of the unique features of this approach is that it explicitly considers the individual grains or particles and all their interactions. The DEM is an extension of the Molecular Dynamics (MD) approach. The motion of the particles is governed by Newton's second law and the rigid body dynamic equations are generally solved by applying an explicit time-stepping algorithm. Spherical particles are usually used, as this results in the most efficient contact detection. Nevertheless, with the increase of computing power, non-spherical particles are becoming more popular. In addition, great effort is being made to couple the method with continuum methods to model multiphase materials. The talk discusses recent developments of the DEM in Geomechanics based on the open-source framework YADE and some of its ongoing challenges.
Bio: Klaus has more than 10 years' experience in the development of cutting-edge numerical tools for geotechnical engineering and rock mechanics applications. He obtained his PhD in civil engineering from Graz University of Technology (Austria). After moving to Australia, he expanded his initial research experience on continuum-based numerical modelling with the Boundary Element Method (BEM) and Finite Element Method (FEM) by taking on the Discrete Element Method (DEM), a discontinuum-based method. He is an active developer of the open-source DEM framework YADE (https://yade-dem.org), an efficient numerical tool for the dynamic simulation of geomaterials. Lately he has been concentrating on the development of a highly innovative framework for the modelling of deformable discrete elements.
There is an intriguing analogy between number fields and function fields. If we view classical Number Theory as the study of the ring of integers and its extensions, then function field arithmetic is the study of the ring of polynomials over a finite field and its extensions. According to this analogy, most constructions and phenomena in classical Number Theory, ranging from the elementary theorems of Euler, Fermat and Wilson, to the Riemann Hypothesis, Elliptic curves, class field theory and modular forms all have their function field analogues. I will give a panoramic tour of some of these constructions and highlight their similarities and differences to their classical counterparts.
This lecture should be accessible to advanced undergraduate students.
I will discuss the various completed, ongoing, and planned mathematics visualisation projects within CARMA's SeeLab visualisation laboratory.
Bio: Michael Assis was awarded a PhD in Statistical Mechanics at Stony Brook University in 2014, and then took a postdoctoral fellowship at the University of Melbourne. In 2017 he held a computational mathematics postdoctoral position within CARMA, and earlier this year he worked to develop CARMA's SeeLab mathematics visualisation laboratory together with David Allingham.
The use of various methods to obtain close to optimal quantization leads to interesting questions about the behavior of random processes, Diophantine approximation, ergodic maps, shrinking targets, and other related constructions. The goal in all of these approaches to quantization is the speed of decrease of the error, coupled with the simplicity and concreteness of the process employed.
Let $M(n)$ be the number of distinct entries in the multiplication table for integers smaller than $n$. More precisely, $M(n) := |\{\, ij \mid 0 \le i, j < n \,\}|$. The order of magnitude of $M(n)$ was established in a series of papers by various authors, starting with Erdős (1950) and ending with Ford (2008), but an asymptotic formula for $M(n)$ is still unknown. After describing some of the history of $M(n)$ I will consider two algorithms for computing $M(n)$ exactly for moderate values of $n$, and several Monte Carlo algorithms for estimating $M(n)$ accurately for large $n$. This leads to consideration of algorithms, due to Bach (1985-88) and Kalai (2003), for generating random factored integers - integers $r$ that are uniformly distributed in a given interval, together with the complete prime factorisation of $r$. The talk will describe ongoing work with Carl Pomerance (Dartmouth, New Hampshire) and Jonathan Webster (Butler, Indiana).
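As a point of comparison for the algorithms mentioned above, the naive exact computation is a few lines of Python (illustrative only; it needs $O(n^2)$ time and memory and is hopeless for the large $n$ the talk is concerned with):

def M(n):
    """Number of distinct entries in the n-by-n multiplication table,
    M(n) = |{ i*j : 0 <= i, j < n }|.  Brute force, nothing like the
    algorithms discussed in the talk."""
    return len({i * j for i in range(n) for j in range(n)})

for n in (10, 100, 1000):
    print(n, M(n))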
Bio: Richard Brent is a graduate of Monash and Stanford Universities. His research interests include analysis of algorithms, computational complexity, parallel algorithms, structured linear systems, and computational number theory. He has worked at IBM Research (Yorktown Heights), Stanford, Harvard, Oxford, ANU and the University of Newcastle (NSW). In 1978 he was appointed Foundation Professor of Computer Science at ANU, and in 1983 he joined the Centre for Mathematical Analysis (also at ANU). In 1998 he moved to Oxford, returning to ANU in 2005 as an ARC Federation Fellow. He was awarded the Australian Mathematical Society Medal (1984), the Hannan Medal of the Australian Academy of Science (2005), and the Moyal Medal (2014). Brent is a Fellow of the Australian Academy of Science, the Australian Mathematical Society, the IEEE, ACM, IMA, SIAM, etc. He has supervised twenty PhD students and is the author of two books and about 270 papers. In 2011 he retired from ANU and moved to Newcastle to join CARMA, at the invitation of the late Jon Borwein.
(joint work with Federico Berlai) A natural way to study infinite groups is by looking at their finite quotients. A subset S of a group G is then said to be (finitely) separable in G if we can recognise it in some finite quotient of G, meaning that for every g outside of S there is a finite quotient of G such that the image of g under the canonical projection does not belong to the image of S. We can then describe classes of groups by specifying which types of subsets we require to be separable: residually finite groups have separable singletons, conjugacy separable groups have separable conjugacy classes of elements, cyclic subgroup separable groups have separable cyclic subgroups and so on... We could also restrict our attention only to some class of quotients, such as finite p-groups, solvable, alternating... Properties of this type are called separability properties. In the case when the class of admissible quotients has reasonable closure properties, we can use topological methods.
We prove that the property of being cyclic subgroup separable, that is having all cyclic subgroups closed in the profinite topology, is preserved under forming graph products.
Furthermore, we develop the tools to study the analogous question in the pro-p case. For a wide class of groups we show that the relevant cyclic subgroups - which are called p-isolated - are closed in the pro-p topology of the graph product. In particular, we show that every p-isolated cyclic subgroup of a right-angled Artin group is closed in the pro-p topology and, consequently, we show that maximal cyclic subgroups of a right-angled Artin group are p-separable for every p.
In this presentation, I describe and reflect upon teaching the mathematics sequence within the Bachelor of Science (Extended), the BSc (Ext), program at the University of Melbourne. In the introduction to the presentation, I give a brief overview of the BSc (Ext) which was established in 2015 at the University of Melbourne as a pathway program to enable Indigenous students to successfully transition into their studies in science and related areas. In the main part of the presentation, I present the key aspects of the high-expectations teaching approach that I use. This consists, firstly, of constructing a classroom context that affirms the mathematical capacities of Indigenous students, based on insights from Australian First Nations educator, Dr Chris Sarra and Professor Russell Bishop, a First Nations scholar from New Zealand. I present feedback from students and other materials to illustrate the importance of setting this learning framework. In the final section, I present student attempts and feedback from a 'Learner Generated Example' question within a specific mathematics assignment. This illustrates the advanced learning that is possible once a high-expectations context has been established.
See here for an abstract.
(joint work with R. Grigorchuk and D. Horadam) The tree representation theorem represents a certain group associated with the scale of an automorphism of a t.d.l.c. group as acting by symmetries of a regular (unrooted) tree. It shows that groups acting on regular trees are a fundamental part of the theory of t.d.l.c. groups.
There is also an extensive theory of self-similar and self-replicating groups of symmetries of rooted trees which has developed from the discovery (or creation) of examples such as the Grigorchuk groups.
It will be seen in this talk that these two branches of research are studying essentially the same groups.
Lax-Phillips scattering theory is a method to solve for scattering as an expansion over the singularities of the analytic extension of the scattering problem to complex frequencies. I will show how a complete theory can be developed in the case of simple scattering problems. I will illustrate how this theory can be used to find a numerical solution, and I will illustrate the method by applying it to the vibration of ice shelves.
We introduce the notion of self-similarity for groups acting on regular rooted trees as well as their description using automata and wreath iteration. Following the definition of Grigorchuk's group we show that it is an infinite, finitely generated 2-group. The proof illustrates the use of self-similarity.
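For reference, the standard wreath-recursion (self-similar) presentation of Grigorchuk's group acting on the binary rooted tree has generators $a, b, c, d$, where $a$ swaps the two maximal subtrees and

$$ b = (a, c), \qquad c = (a, d), \qquad d = (1, b), $$

with the pair notation recording the induced action on the two subtrees; notational conventions vary slightly between sources, but it is this recursion that drives the self-similarity arguments in the talk.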
Given a profinite group $G$, we can consider the semigroup $\mathrm{End}(G)$ of continuous homomorphisms from $G$ to itself. In general $\lambda \in\mathrm{End}(G)$ can be injective but not surjective, or vice versa: consider for instance the case when $G$ is the group $\mathbb{F}_p[[t]]$ of formal power series over a finite field, $n$ is an integer, and $\lambda_n$ is the continuous endomorphism that sends $t^k$ to $t^{k+n}$ if $k+n \ge 0$ and $0$ otherwise. However, when $G$ has only finitely many open subgroups of each index (for instance, if $G$ is finitely generated), the structure of endomorphisms is much more restricted: given $\lambda \in\mathrm{End}(G)$, then $G$ can be written as a semidirect product $N \rtimes H$ of closed subgroups, where $\lambda$ acts as an automorphism on $H$ and a contracting endomorphism on $N$. When $\lambda$ is open and injective, the structure of $N$ can be restricted further using results of Glöckner and Willis (including the very recent progress that George told us about a few weeks ago). This puts some restrictions on the profinite groups that can appear as a '$V_+$' group for an automorphism of a t.d.l.c. group.
The existence of so-called dark energy and matter in the universe implies that the conventional accounting of mass and energy is incorrect. Here, we use the framework of special relativity and validation through Lorentz invariance to develop an alternative accounting of mass and energy. We assume the usual Einstein relations of special relativity, but we make the distinction between the particle energy $e = mc^2$ and the actual work done by the particle $E^*$, and we adopt the perspective that it is not just the momentum vector $\mathbf{p} = m\mathbf{u}$ that contributes to the work done $E^*$, but rather the intrinsic particle energy $e$ itself plays an important role through the combined potentials $(\mathbf{p}, e/c)$ as a well-defined four-vector within special relativity. The resulting formulation provides a natural extension of Newton's second law, emerges as a fully consistent development of special relativity that is properly invariant under the Lorentz group, and yields an extension of Einstein's famous equation for the work done involving new terms. The new work done expressions can involve the log function, and possibly generate extremely large energies that might well represent the first formal indication of the origin of dark energy. Two alternative expressions are both well defined as a limiting case for energy-mass waves travelling at the speed of light, and are in complete accord with well-established theory for photons and light for which energy is known to vary linearly with momentum. The present formulation suggests that large energies might be generated even for slowly moving systems, and that dark energy might arise in consequence of conventional mechanical theory neglecting the work done in the direction of time.
Every compactly generated t.d.l.c. group acts vertex transitively on a locally finite graph with compact open vertex stabilisers. Such a graph is called a rough Cayley graph and, up to quasi-isometry, is an invariant for the group. This allows us to define Gromov hyperbolic t.d.l.c. groups and their Gromov boundary in a way analogous to the finitely generated case.
The space of directions of a t.d.l.c. group is a metric space 'at infinity' obtained by analysing the action of the group on the set of compact open subgroups. It is particularly useful for detecting flat subgroups, that is, subgroups that look like $\mathbb{Z}^n$.
In my talk, I will introduce these two concepts of boundary and give some new results which relate them. Time permitting, I may also give details about the proofs.
In this talk, we will introduce a class of tree automorphism groups known as bounded automata. From this definition, we will see that many of the interesting examples of self-similar groups in the literature are members of this class.
A problem in group theory is classifying groups based on the difficulty of solving their co-word problems, that is, classifying them by the computational difficulty of deciding whether a word is not equivalent to the identity. Some well-known results in this study are that a group has a co-word problem given by a regular language if and only if it is finite, a deterministic context-free language if and only if it is virtually free, and a deterministic one-counter machine if and only if it is virtually cyclic. Each of these language classes corresponds to a natural and well-studied model of computation.
We will show that the class of bounded automata groups has a co-word problem given by an ET0L language – a class of formal languages which has recently gained popularity in areas of group theory. This strengthens a recent result of Holt and Röver (who showed this result for a less restrictive class of languages) and extends a result of Ciobanu-Elder-Ferov (who proved this result for the first Grigorchuk group).
In this talk, I will discuss a simple modification of the forward-backward splitting method for finding a zero in the sum of two monotone operators. This modified method converges under the same assumptions as Tseng's forward-backward-forward method; namely, it does not require cocoercivity of the single-valued operator but only Lipschitz continuity. Each iteration of the method requires only one forward evaluation rather than two, as is the case in Tseng's method. Variants incorporating a linesearch, an inertial term, or a structured three-operator inclusion will also be discussed. Based on joint work with Yura Malitsky (University of Göttingen).
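For orientation, the classical forward-backward iteration for $0 \in Ax + Bx$, with $A$ maximally monotone and $B$ single-valued, is $x_{k+1} = (\mathrm{Id} + \lambda A)^{-1}(x_k - \lambda B x_k)$, whose convergence theory requires cocoercivity of $B$. One known way to keep a single forward evaluation per iteration while assuming only Lipschitz continuity, and (I believe) the type of modification meant here, is the forward-reflected-backward update

$$ x_{k+1} = (\mathrm{Id} + \lambda A)^{-1}\big( x_k - \lambda (2 B x_k - B x_{k-1}) \big), $$

in which the previously computed value $B x_{k-1}$ is reused, so each iteration costs one evaluation of $B$ and one resolvent of $A$, for a suitably small step size $\lambda$.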
I will describe the relationship between self-similar groups, permutational bimodules and virtual group endomorphisms. Based on chapter 2 of Nekrashevych’s book.
We consider a class of linear operator equations that includes systems of ordinary differential equations, difference equations and fractional-order ordinary differential equations. This class also includes Fredholm integral equations, operator exponentials and powers, as well as eigenvalue problems. We generalise the idea of a fundamental matrix and provide an explicit method for obtaining an exact series solution to these types of operator equations, together with sufficient conditions for convergence and error bounds. Illustrative examples are also given.
Let X be an algebraic curve over an algebraically closed field of characteristic two. We will prove that for any such curve X, there exists a tamely ramified morphism from X to the projective line. The assertion is closely related to Belyi's theorem. In this talk, we first recall Belyi's theorem and its positive characteristic analogue. Next we introduce a key notion called "pseudo-tame", which plays an important role in our proof, and we prove that there exists a pseudo-tamely ramified morphism from X to the projective line by showing that an obstruction class vanishes. If time permits, we give a way to construct a tamely ramified morphism from X to the projective line using a pseudo-tamely ramified morphism. This is joint work with Seidai Yasuda of Osaka University.
Given a group, one of the most natural things to study is its subgroup lattice, in which the maximal subgroups take a prominent role. If the group is infinite, one can ask whether all maximal subgroups have finite index or whether there are some (and how many) of infinite index. After recounting some historical developments around this question, I will motivate the study of maximal subgroups of groups of intermediate growth and report on joint work with Dominik Francoeur where we give a complete description of all maximal subgroups of some "siblings" of Grigorchuk's group.
The divergence of a pair of geodesics in a metric space measures how fast they spread apart. For example, in Euclidean space all pairs of geodesics diverge linearly, while in hyperbolic space all pairs of geodesics diverge exponentially. In the 1980s Gromov proved that in symmetric spaces of non-compact type, the only possible divergence rates are linear or exponential, and he asked whether the same dichotomy holds in CAT(0) spaces. Soon afterwards, Gersten used these ideas to define a quasi-isometry invariant, also called divergence, which measures the "worst" rate of divergence. Gersten and others have since found many examples of finitely generated groups with quadratic divergence. We study divergence in right-angled Coxeter groups with triangle-free defining graphs. Using the structure of certain flats in the associated Davis complex, which is a CAT(0) square complex, we characterise such groups with linear and quadratic divergence, and construct examples of right-angled Coxeter groups with divergence polynomial of arbitrary degree. This is joint work with Pallavi Dani (Louisiana State University).
In the first part of this seminar, I will present some geometric cocycles associated to trees and ways to compute their norms. A similar construction exists for Euclidean buildings, but no satisfactory estimates of the norms are currently known. In the second part, I will discuss some ongoing research with Thibaut Pillon on actions of the infinite cyclic group by piecewise translations on locally compact groups. Piecewise translation actions have been well studied for finitely generated groups, e.g. by Whyte, and provide positive answers to the von Neumann-Day problem or the Burnside problem. The generalization to LC-groups was introduced by Schneider. The topic seems to have interesting implications for tdlc-groups.
Hausdorff dimension has become a standard tool to measure the "size" of fractals in real space. However, it can be defined on any metric space and therefore can be used to measure the "size" of subgroups of, say, pro-p groups (with respect to a chosen metric). This line of investigation was started 20 years ago by Barnea and Shalev, who showed that p-adic analytic groups do not have any "fractal" subgroups, and asked whether this characterises them among finitely generated pro-p groups. I will explain what all of this means and report on joint work with Oihana Garaialde and Benjamin Klopsch in which, while trying to solve this problem, we ended up showing an analogue of a theorem of Schreier in the context of pro-p groups of positive rank gradient: any finitely generated infinite normal subgroup of a pro-p group of positive rank gradient is of finite index. I will also explain what "positive rank gradient" means, and why pro-p groups with such a property are "free-like".
The QUT porous media modelling group has developed a number of fruitful collaborations with industry partners over the last 20 years where large-scale computations were utilised to investigate and optimise operations. In this lecture I will reflect on the rich experience of working with industry — from commercial research to work integrated learning for final year students. A pleasing outcome is the impact our research has had on industry practices. A selection of our past modelling projects will be reviewed, including:
I will also provide a brief survey of the computational solution strategies employed for the models.
Brief Biography: Ian Turner is a professor of computational mathematics in the School of Mathematical Sciences at the Queensland University of Technology. His main research interests are in the fields of computational mathematics and numerical analysis, where he has over thirty years' experience in solving systems of coupled, nonlinear partial differential equations that govern flow in porous media. He has published over 250 research articles in a wide cross-section of journals spanning science and engineering, and his multidisciplinary research demonstrates a strong interaction with industry. He is also a former Head of the School of Mathematical Sciences at QUT. Recently, Professor Turner was named in the 2015 and 2016 Thomson Reuters and 2017 Clarivate Analytics Web of Science lists of Highly Cited Researchers.
Almost by definition, the main tool and goal of Geometric Group Theory is to find and exploit correspondences between geometric and algebraic features of groups. Following this philosophy, I will focus on the question: what does it mean for a sub(space/group) to "sit nicely" inside a bigger (space/group)?
Focusing on groups, for a subgroup H of a group G, possible answers for the above question are when the subgroup H is: quasi-isometrically embedded, undistorted, normal/malnormal, finitely generated, geometrically separated...
Many of the above are equivalent when H is a quasiconvex subgroup of a hyperbolic group G, providing very successful correspondences between geometric and algebraic properties of subgroups.
The goal of this talk is to review quasiconvexity in hyperbolic spaces and to try to generalize several of those features in a broader setting, namely the class of hierarchically hyperbolic groups (HHG). This is joint work with Hung C. Tran and Jacob Russell.
During my study leave in 2018 I applied nonlinear stability analysis techniques to the Douglas-Rachford Algorithm, with the aim of shedding light on the interesting non-convex case, where convergence is often observed but seldom proven. The Douglas-Rachford Algorithm can solve optimisation and feasibility problems, provably converges weakly to solutions in the convex case, and constitutes a practical heuristic in non-convex cases. Lyapunov functions are stability certificates for difference inclusions in nonlinear stability analysis. Some other recent nonlinear stability results are showcased as well.
Zombies are a popular figure in pop culture/entertainment and they are usually portrayed as being brought about through an outbreak or epidemic. Consequently, we model a zombie attack, using biological assumptions based on popular zombie movies. We introduce a basic model for zombie infection, determine equilibria and their stability, and illustrate the outcome with numerical solutions. We then refine the model to introduce a latent period of zombification, whereby humans are infected, but not infectious, before becoming undead. We then modify the model to include the effects of possible quarantine or a cure. Finally, we examine the impact of regular, impulsive reductions in the number of zombies and derive conditions under which eradication can occur. We show that only quick, aggressive attacks can stave off the doomsday scenario: the collapse of society as zombies overtake us all.
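To accompany this abstract, here is a minimal sketch of one simple susceptible-zombie-removed system of the kind described, solved numerically; the specific equations, parameter values and time horizon are illustrative assumptions rather than the talk's exact model.

```python
from scipy.integrate import solve_ivp

def szr(t, y, beta, alpha, zeta):
    # S: susceptible humans, Z: zombies, R: removed (defeated zombies / dead humans)
    S, Z, R = y
    dS = -beta * S * Z                            # humans lost to zombie encounters
    dZ = beta * S * Z + zeta * R - alpha * S * Z  # new zombies + reanimation - zombies defeated
    dR = alpha * S * Z - zeta * R                 # defeated zombies, some of which reanimate
    return [dS, dZ, dR]

sol = solve_ivp(szr, (0, 30), [500.0, 1.0, 0.0], args=(0.0095, 0.005, 0.02))
S, Z, R = sol.y[:, -1]
print(f"after 30 days: S = {S:.1f}, Z = {Z:.1f}, R = {R:.1f}")
```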
In this talk, I will show how to build $C^*$-algebras using a family of local homeomorphisms. We will then compute the KMS states of the resulting algebras using the Laca-Neshveyev machinery. I will then apply this result to $C^*$-algebras of $k$-graphs and obtain interesting $C^*$-algebraic information about $k$-graph algebras. This talk is based on joint work with Astrid an Huef and Iain Raeburn.
The KMS condition for equilibrium states of C*-dynamical systems has been around since the 1960’s. With the introduction of systems arising from number theory and from semigroup dynamics following pioneering work of Bost and Connes, their study has accelerated significantly in the last 25 years. I will give a brief introduction to C*-dynamical systems and their KMS states and discuss two constructions that exhibit fascinating connections with key open questions in mathematics such as Hilbert’s 12th problem on explicit class field theory and Furstenberg’s x2 x3 conjecture.
Using a variant of the Laca-Raeburn program for calculating KMS states, Laca, Raeburn, Ramagge and Whittaker showed that, at any inverse temperature above a critical value, the KMS states arising from self-similar actions of groups (or groupoids) $G$ are parameterised by traces on $C^*(G)$. The parameterisation takes the form of a self-mapping $\chi$ of the trace space of $C^*(G)$ that is built from the structure of the stabilisers of the self-similar action. I will outline how this works, and then sketch how to see that $\chi$ has a unique fixed point, which picks out the ``preferred'' trace of $C^*(G)$ corresponding to the only KMS state that persists at the critical inverse temperature. The first part of this will be an exposition of results of Laca-Raeburn-Ramagge-Whittaker. The second part is joint work with Joan Claramunt.
The problem of packing space with regular tetrahedra has a 2000 year history. This talk surveys the history of work on the problem. It includes work by mathematicians, computer scientists, physicists, chemists, and materials scientists. Much progress has been made on it in recent years, yet there remain many unsolved problems.
In this talk, we will present a brief overview of the mathematical diffraction of structures which have no translational symmetry but are not precluded from exhibiting long-range order. We introduce aperiodic tilings as toy models for such structures and discuss the relevant measure-theoretic formulation of the diffraction analysis. In particular, we focus on the component of the diffraction that suggests stochasticity but can be non-trivial for deterministic systems, and how its absence can be confirmed using some techniques involving Lyapunov exponents and Mahler measures. This is joint work with Michael Baake, Michael Coons, Franz Gaehler and Uwe Grimm.
Mahler's method in number theory is an area wherein one answers questions surrounding the transcendence and algebraic independence of both power series $F(z)$, which satisfy the functional equation $$a_0(z)F(z)+a_1(z)F(z^k)+\cdots+a_d(z)F(z^{k^d})=0$$ for some integers $k\geqslant 2$ and $d\geqslant 1$ and polynomials $a_0(z),\ldots,a_d(z)$, and their special values $F(\alpha)$, typically at algebraic numbers $\alpha$. The most important examples of Mahler functions arise from central sequences in theoretical computer science and dynamical systems, and many are related to digital properties of sets of numbers. For example, the generating function $T(z)$ of the Thue-Morse sequence, which is known to be the fixed point of a uniform morphism in computer science or equivalently a constant-length substitution system in dynamics, is a Mahler function. In 1930, Mahler proved that the numbers $T(\alpha)$ are transcendental for all non-zero algebraic numbers $\alpha$ in the complex open unit disc. With digital computers and computation so prevalent in our society, such results seem almost second nature these days and thinking about them is very natural. But what is one really trying to communicate by proving that functions or numbers such as those considered in Mahler's method are transcendental?
In this talk, highlighting work from the very beginning of Mahler's career, we speculate---and provide some variations---on what Mahler was really trying to understand. This talk will combine modern and historical methods and will be accessible to students.
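A standard worked instance of the functional equation above (a well-known fact, included here only for illustration, not a result of the talk): the $\pm 1$ version of the Thue-Morse generating function, $F(z)=\sum_{n\geqslant 0}(-1)^{t_n}z^n=\prod_{j\geqslant 0}(1-z^{2^j})$, satisfies $$F(z)-(1-z)\,F(z^2)=0,$$ which is the equation above with $k=2$, $d=1$, $a_0(z)=1$ and $a_1(z)=-(1-z)$; the identity follows immediately from the product form, since stripping the $j=0$ factor leaves $F(z^2)$.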
This project aims to investigate algebraic objects known as 0-dimensional groups, which are a mathematical tool for analysing the symmetry of infinite networks. Group theory has been used to classify possible types of symmetry in various contexts for nearly two centuries now, and 0-dimensional groups are the current frontier of knowledge. The expected outcome of the project is that the understanding of the abstract groups will be substantially advanced, and that this understanding will shed light on structures possessing 0-dimensional symmetry. In addition to being cultural achievements in their own right, advances in group theory such as this also often have significant translational benefits. This will provide benefits such as the creation of tools relevant to information science and researchers trained in the use of these tools.
The project aims to develop novel techniques to investigate Geometric analysis on infinite dimensional bundles, as well as Geometric analysis of pathological spaces with Cantor set as fibre, that arise in models for the fractional quantum Hall effect and topological matter, areas recognised with the 1998 and 2016 Nobel Prizes. Building on the applicant's expertise in the area, the project will involve postgraduate and postdoctoral training in order to enhance Australia's position at the forefront of international research in Geometric Analysis. Ultimately, the project will enhance Australia's leading position in the area of Index Theory by developing novel techniques to solve challenging conjectures, and mentoring HDR students and ECRs.
This project aims to solve hard, outstanding problems which have impeded our ability to progress in the area of quantum or noncommutative calculus. Calculus has provided an invaluable tool to science, enabling scientific and technological revolutions throughout the past two centuries. The project will initiate a program of collaboration among top mathematical researchers from around the world and bring together two separate mathematical areas into a powerful new set of tools. The outcomes from the project will impact research at the forefront of mathematical physics and other sciences and enhance Australia's reputation and standing.
Imagine a world where physical and chemical laboratories are unnecessary, because all experiments can be simulated accurately on a computer. In principle this is possible by solving the quantum mechanical Schrödinger equation. Unfortunately, this is far from trivial and practically impossible for large and complex materials and reactions. In 1998, Walter Kohn and John A. Pople won the Nobel Prize in Chemistry for developing density-functional theory (DFT). DFT makes it possible to find solutions of the Schrödinger equation much more efficiently than ab-initio and similar approaches, thus enabling the computation of materials properties in an unprecedented way. In this seminar, I will introduce quantum mechanical principles and the basic idea of DFT. Then, I will present an example of the computational elucidation of a reaction mechanism in materials science.
The old joke is that a topologist can’t distinguish between a coffee cup and a doughnut. A recent variant of homology, called persistent homology, can be used in data analysis to understand the shape of data. I will give an introduction to persistent homology and describe two example applications of this tool.
I introduce and demonstrate the Coq proof assistant.
It is commonly expected that $e$, $\log 2$, $\sqrt{2}$, among other « classical » numbers, behave, in many respects, like almost all real numbers. For instance, their decimal expansion should contain every finite block of digits from $\{0, \ldots , 9\}$. We are very far away from establishing such a strong assertion. However, there has been some small recent progress in that direction. Let $\xi$ be an irrational real number. Its irrationality exponent, denoted by $\mu (\xi)$, is the supremum of the real numbers $\mu$ for which there are infinitely many integer pairs $(p, q)$ such that $|\xi - \frac{p}{q}| < q^{-\mu}$. It measures the quality of approximation to $\xi$ by rationals. We always have $\mu (\xi) \ge 2$, with equality for almost all real numbers and for irrational algebraic numbers (by Roth's theorem). We prove that, if the irrationality exponent of $\xi$ is equal to $2$ or slightly greater than $2$, then the decimal expansion of $\xi$ cannot be `too simple', in a suitable sense. Our result applies, among other classical numbers, to badly approximable numbers, non-zero rational powers of ${{\rm e}}$, and $\log (1 + \frac{1}{a})$, provided that the integer $a$ is sufficiently large. It establishes an unexpected connection between the irrationality exponent of a real number and its decimal expansion.
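For orientation alongside the definition above (standard facts, not results of the talk): the Liouville constant $\xi_L=\sum_{n\geqslant 1}10^{-n!}$ satisfies $$\Bigl|\xi_L-\frac{p_N}{q_N}\Bigr|\leqslant 2\cdot 10^{-(N+1)!}=2\,q_N^{-(N+1)},\qquad q_N=10^{N!},\ \ p_N=q_N\sum_{n=1}^{N}10^{-n!},$$ for every $N$, so $\mu(\xi_L)=\infty$; at the other extreme, the golden ratio has $\mu=2$, as does every irrational algebraic number by Roth's theorem.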
Motivated by the construction of conformal field theories, Jones recently discovered a very general process that produces actions of the Thompson groups $F$, $T$ and $V$, such as unitary representations or actions on $C^{\ast}$-algebras. I will give a general panorama of this construction along with many examples and present various applications regarding analytical properties of groups and, if time permits, in lattice theory (e.g. quantum field theory).
Let $t$ be the multiplicative inverse of the golden mean. In 1995 Sean Cleary introduced the irrational-slope Thompson's group $F_t$, which is the group of piecewise-linear maps of the interval $[0,1]$ with breaks in $\mathbb{Z}[t]$ and slopes powers of $t$. In this talk we describe this group using tree-pair diagrams, then exhibit a finite presentation and a normal form, and prove that its commutator subgroup is simple. This group is the first example of a group of piecewise-linear maps of the interval whose abelianisation has torsion, and it is an open problem whether this group is a subgroup of Thompson's group $F$.
A Jonsson-Tarski algebra is a set $X$ endowed with an isomorphism $X \to X \times X$. As observed by Freyd, the category of Jonsson-Tarski algebras is a Grothendieck topos - a highly structured mathematical object which is at once a generalised topological space and a generalised universe of sets.
In particular, one can do algebra, topology and functional analysis inside the Jonsson-Tarski topos, and on doing so, the following objects simply pop out: Cantor space; Thompson's group V; the Leavitt algebra $L_2$; the Cuntz semigroup $S_2$; and the reduced $C^{\ast}$-algebra of $S_2$. The first objective of this talk is to explain how this happens.
The second objective is to describe other "self-similar toposes" associated to, for example, self-similar group actions, directed graphs and higher-rank graphs; and again, each such topos contains within it a familiar menagerie of algebraic-analytic objects. If time permits, I will also explain a further intriguing example which gives rise to Thompson's group F and, I suspect, the Farey AF algebra.
No expertise in topos theory is required; such background as is necessary will be developed in the talk.
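As a small concrete companion to this abstract (my own toy illustration, not taken from the talk): Cantor space, modelled as the set of functions from the natural numbers to {0, 1}, carries a Jonsson-Tarski structure via the interleaving bijection realising $X \cong X \times X$.

```python
# Cantor space as infinite binary sequences, represented by functions from N to {0, 1}.
# split and its inverse merge exhibit the isomorphism X -> X x X defining a
# Jonsson-Tarski algebra: a sequence is split into its even- and odd-indexed parts.

def split(x):
    """X -> X x X: the interleaving isomorphism on binary sequences."""
    return (lambda n: x(2 * n), lambda n: x(2 * n + 1))

def merge(pair):
    """X x X -> X: the inverse of split."""
    left, right = pair
    return lambda n: left(n // 2) if n % 2 == 0 else right(n // 2)

# Sanity check on the sequence x(n) = parity of the binary digit sum of n.
x = lambda n: bin(n).count("1") % 2
y = merge(split(x))
assert all(x(n) == y(n) for n in range(256))   # merge . split = identity on a finite window
print("split/merge round-trip agrees on the first 256 coordinates")
```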
Sea ice acts as a refrigerator for the world. Its bright surface reflects solar heat, and the salt it expels during the freezing process drives cold water towards the equator. As a result, sea ice plays a crucial role in our climate system. Antarctic sea-ice extent has shown a large degree of regional variability, in stark contrast with the steady decreasing trend found in the Arctic. This variability is within the range of natural fluctuations, and may be ascribed to the high incidence of weather extremes, like intense cyclones, that give rise to large waves, significant wind drag, and ice deformation. The role exerted by waves on sea ice is still particularly enigmatic and has attracted a lot of attention over the past years. Starting from theoretical knowledge, new understanding based on experimental models and computational fluid dynamics is presented. But exploration of waves-in-ice cannot be exhausted without being in the field. And this is why I found myself in the middle of the Southern Ocean during a category five polar cyclone to measure waves…
The models of collective decision-making considered in this presentation are nonlinear interconnected systems with saturating interactions, similar to Hopfield networks. These systems encode the possible outcomes of a decision process into different steady states of the dynamics. When the model is cooperative, i.e., when the underlying adjacency matrix is Metzler, the system is characterized by the presence of two main attractors, one positive and the other negative, representing two choices of agreement among the agents, associated with the Perron-Frobenius eigenvector of the system. Such equilibria are achieved when there is a sufficiently high 'social commitment' among the agents (here interpreted as a bifurcation parameter). When instead cooperation and antagonism coexist, the resulting signed graph is in general not structurally balanced, meaning that the Perron-Frobenius theorem does not apply directly. It is shown that the decision-making process is affected by the distance to structural balance, in the sense that the higher the frustration of the graph, the higher the commitment strength at which the system bifurcates. In both cases, it is possible to give conditions on the commitment strength beyond which other equilibria start to appear. These extra bifurcations are related to the algebraic connectivity of the graph.
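The following is a minimal simulation sketch of the kind of saturated interconnected dynamics described in this abstract, here taken as dx/dt = -x + pi * A * tanh(x) with a cooperative (nonnegative) adjacency matrix A; the specific functional form, network, and the two values of the commitment parameter pi are illustrative assumptions, not the presenter's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def decision_dynamics(t, x, pi, A):
    # Saturated Hopfield-like decision dynamics: each agent relaxes towards a
    # weighted, saturated combination of its neighbours' opinions.
    return -x + pi * A @ np.tanh(x)

rng = np.random.default_rng(1)
n = 8
A = rng.uniform(0.0, 1.0, size=(n, n))      # cooperative (nonnegative) interactions
np.fill_diagonal(A, 0.0)

x0 = 0.1 * rng.standard_normal(n)
for pi in (0.1, 2.0):                        # roughly below / above the bifurcation threshold
    sol = solve_ivp(decision_dynamics, (0, 50), x0, args=(pi, A))
    xf = sol.y[:, -1]
    print(f"pi = {pi}: |x| = {np.linalg.norm(xf):.3f}, signs = {np.sign(xf.round(3)).astype(int)}")
```

For small pi the origin (indecision) is the only attractor, while for larger pi the trajectory settles on a sign-coherent agreement state, mirroring the bifurcation picture sketched above.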
We investigate the construction of multidimensional prolate spheroidal wave functions using techniques from Clifford analysis. The prolates are eigenfunctions of a time-frequency limiting operator, but we show that they are also eigenfunctions of a differential operator. In an effort to compute solutions of this operator, we prove a Bonnet formula for a class of Clifford-Gegenbauer polynomials.
We discuss various optimisation-based approaches to machine learning. Tasks include regression, clustering, and classification. We discuss frequently used terms like 'unsupervised learning,' 'penalty methods,' and 'dual problem.' We motivate our discussion with simple examples and visualisations.
The calculus of variations is utilized to minimize the elastic energy arising from the curvature squared while maximizing the van der Waals energy. Firstly, the shape of folded graphene sheets is investigated, and an arbitrary constant arising from integrating the Euler–Lagrange equation is determined. In this study, the structure is assumed to have a translational symmetry along the fold, so that the problem may be reduced to a two-dimensional problem with reflective symmetry across the fold. Secondly, both the calculus of variations and a least-squares minimization procedure are employed to determine the joining structure involving a C60 fullerene and a carbon nanotube, namely a nanobud. We find that these two methods are in reasonable overall agreement. However, there is no experimental or simulation data to determine which procedure gives the more realistic results.
For linear and nonlinear dynamical systems, control problems such as feedback stabilization of target sets and feedback laws guaranteeing obstacle avoidance are topics of interest throughout the control literature. While the isolated problems (i.e., guaranteeing only stability or avoidance) are well understood, the combined control problem guaranteeing stability and avoidance simultaneously is leading to significant challenges even in the case of linear systems. In this talk we highlight difficulties in the controller design with conflicting objectives in terms of guaranteed avoidance of bounded sets and asymptotic stability of the origin. In addition, using the framework of hybrid systems, we propose a partial solution to the combined control problem for underactuated linear systems.
In this talk, I will survey some of the famous quotient algorithms that can be used to compute efficiently with finitely presented groups. The last part of the talk will be about joint work with Alexander Hulpke (Colorado State University): we have looked at quotient algorithms for non-solvable groups, and I will report on the findings so far.
In computer science, an isomorphism testing problem asks whether two objects are in the same orbit under a group action. The most famous problem of this type has been the graph isomorphism problem. In late 2015, L. Babai announced a quasipolynomial-time algorithm for the graph isomorphism problem, which is widely regarded as a breakthrough in theoretical computer science. This leads to a natural question, that is, which isomorphism testing problems should naturally draw our attention for further exploration?
The Galois group of a polynomial is the automorphism group of its splitting field. These automorphisms act by permuting the roots of the polynomial, so a Galois group is a subgroup of a symmetric group. Using the Galois group, the splitting field of a polynomial can be computed more efficiently than otherwise, by exploiting the symmetries of the roots. I will present an algorithm developed by Fieker and Klueners, which I have extended, for computing Galois groups of polynomials over arithmetic fields, as well as approaches to computing splitting fields using the symmetries of the roots.
The history of projection methods goes back to von Neumann and his method of alternating projections for finding a point in the intersection of two linear subspaces. These days the method of alternating projections and its various modifications, such as the Douglas-Rachford algorithm, are successfully used to solve challenging feasibility and optimisation problems. The convergence of projection methods (and its rate) depends on the structure of the sets that comprise the feasibility problem, and also on their position relative to each other. I will survey a selection of results, focusing on the impact of the geometry of the sets on the convergence.
In the past decade, the research area of arithmetic dynamics has grown in prominence. This area considers iterated maps as dynamical systems, acting on the integers, the rationals or on finite fields (meaning there is a finite phase space in the last case). Tools used to investigate arithmetic dynamics include combinatorics, arithmetic geometry, number theory, graph theory as well as numerical experimentation. There are important applications of arithmetic dynamical systems in cryptography. I will survey some of our investigations in arithmetic dynamics which have been motivated by the order and chaos divide in Hamiltonian dynamics.
Recently, second-order methods have shown great success in a variety of machine learning applications. However, establishing convergence of the canonical member of this class, i.e., the celebrated Newton's method, has long been limited to making restrictive assumptions on (strong) convexity. Furthermore, smoothness assumptions, such as Lipschitz continuity of the gradient/Hessian, have always been an integral part of the analysis. In fact, it is widely believed that in the absence of a well-behaved and continuous Hessian, the application of curvature can hurt more than it can help. This has in turn limited the application range of the classical Newton’s method in machine learning. To set the scene, we first briefly highlight some recent results, which shed light on the advantages of Newton-type methods for machine learning, as compared with first-order alternatives. We then turn our focus to a new member of this class, Newton-MR, which is derived using two seemingly simple modifications of the classical Newton’s method. We show that, unlike the classical Newton’s method, Newton-MR can be applied, beyond the traditional convex settings, to invex problems. Newton-MR appears almost indistinguishable from its classical counterpart, yet it offers a diverse range of algorithmic and theoretical advantages. Furthermore, by introducing a weaker notion of joint regularity of the Hessian and gradient, we show that Newton-MR converges globally even in the absence of the traditional smoothness assumptions. Finally, we obtain local convergence results in terms of the distance to the set of optimal solutions. This greatly relaxes the notion of “isolated minimum”, which is required for the local convergence analysis of the classical Newton’s method. Numerical simulations using several machine learning problems demonstrate the great potential of Newton-MR as compared with several other second-order methods.
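To make the flavour of this abstract concrete, here is a rough sketch of a Newton-type iteration that uses MINRES as the inner solver on the symmetric (possibly indefinite or singular) Hessian system; this is only in the spirit described above and is not the authors' Newton-MR (no line search, regularisation strategy or termination rules), and the toy problem is my own illustrative assumption.

```python
import numpy as np
from scipy.sparse.linalg import minres

def newton_minres(grad, hess, x, iters=20):
    """Newton-type iteration with MINRES as inner solver on H p = -g (H symmetric)."""
    for _ in range(iters):
        g = grad(x)
        p, _ = minres(hess(x), -g)   # inexact solve; MINRES tolerates indefinite/singular H
        x = x + p
    return x

# Toy problem: l2-regularised logistic regression (an illustrative assumption).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
y = (A @ rng.standard_normal(20) > 0).astype(float)    # labels in {0, 1}
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
grad = lambda w: A.T @ (sigmoid(A @ w) - y) + 0.1 * w
hess = lambda w: A.T @ (A * (sigmoid(A @ w) * (1 - sigmoid(A @ w)))[:, None]) + 0.1 * np.eye(20)

w = newton_minres(grad, hess, np.zeros(20))
print("final gradient norm:", np.linalg.norm(grad(w)))
```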
Joris van der Hoeven and I recently discovered an algorithm that computes the product of two $n$-bit integers in $O(n \log n)$ bit operations. This is asymptotically faster than all previous known algorithms, and matches the complexity bound conjectured by Schönhage and Strassen in 1971. In this talk, I will discuss the history of integer multiplication, and give an overview of the new algorithm. No previous background on multiplication algorithms will be assumed.
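For background to this abstract only, the sketch below shows Karatsuba's algorithm from the 1960s, the first to break the quadratic barrier (roughly $O(n^{1.585})$ bit operations); it is emphatically not the new $O(n \log n)$ algorithm of the talk, which relies on far deeper multidimensional FFT techniques. The base-case threshold here is an arbitrary illustrative choice.

```python
def karatsuba(x, y):
    """Multiply nonnegative integers using three half-size products instead of four."""
    if x < 1024 or y < 1024:                  # small case: fall back to builtin multiplication
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> m, x & ((1 << m) - 1)   # x = x_hi * 2^m + x_lo
    y_hi, y_lo = y >> m, y & ((1 << m) - 1)
    a = karatsuba(x_hi, y_hi)
    b = karatsuba(x_lo, y_lo)
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b   # equals x_hi*y_lo + x_lo*y_hi
    return (a << (2 * m)) + (c << m) + b

x, y = 3 ** 2000, 7 ** 1500
assert karatsuba(x, y) == x * y
print("karatsuba agrees with builtin multiplication")
```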
Knuth showed that a permutation can be sorted by passing it right-to-left through an infinite stack if and only if it \emph{avoids} a certain forbidden sub-pattern (231). Since then, many variations have been studied. I will describe some of these, including new work of my PhD student Andrew Goh on stacks in series and ``pop-stacks''.
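A small sketch of the classical result quoted in this abstract (my own illustration, not code from the talk): sort a permutation greedily through a single stack and compare with a direct check for the 231 pattern; under one standard convention the two notions coincide, verified here by brute force for length 6.

```python
from itertools import permutations, combinations

def stack_sortable(perm):
    """Greedily pass perm through one stack; True iff the output comes out sorted."""
    stack, need = [], 1
    for value in perm:
        stack.append(value)
        while stack and stack[-1] == need:   # pop whenever the next required value is on top
            stack.pop()
            need += 1
    while stack and stack[-1] == need:
        stack.pop()
        need += 1
    return not stack

def avoids_231(perm):
    """True iff no positions i<j<k have perm[k] < perm[i] < perm[j] (a 231 pattern)."""
    return not any(perm[k] < perm[i] < perm[j]
                   for i, j, k in combinations(range(len(perm)), 3))

assert all(stack_sortable(p) == avoids_231(p) for p in permutations(range(1, 7)))
print("stack-sortable == 231-avoiding for all permutations of length 6")
```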
The dimer model is the finite discrete prototype for problems studied by different scientific communities. From the mathematical point of view a simple question arises: how many dimer configurations are possible in a given lattice geometry? Typically, in the close-packed arrangement, where the whole lattice space is covered by dimers, different types of dimers organise in a non-homogeneous manner, and under certain conditions this results in a separation of phases characterised by distinct patterns of configurations. The formulation of the dimer model as an integrable two-dimensional lattice model of statistical mechanics opens the path to an investigation of the conformal properties of dimers in the continuum scaling limit. The classification of dimers as a Gaussian free-field theory or a logarithmic field theory is still being debated, for reasons that will be addressed and explained. This is an example of the application of conformal invariance to a statistical model at criticality.
We present three bivariate spline approaches to the scattered data problem. The splines are defined as the minimiser of a penalised least squares functional. The penalties are based on partial differentiation operators, and are integrated using the finite element method. We apply these methods to two problems: to remove the mixture of Gaussian and impulsive noise from an image, and to recover a continuous function from a set of noisy observations. Supervisor: Bishnu Lamichhane
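As a one-dimensional discrete companion to the penalised least squares idea in this abstract (my own stand-in, not the bivariate finite element construction of the talk): minimise the data misfit plus a squared second-difference penalty, which reduces to a single linear solve. The penalty weight and test signal are illustrative assumptions.

```python
import numpy as np

def smooth_penalised(y, lam):
    """Minimise ||f - y||^2 + lam * ||D2 f||^2, where D2 is the second-difference operator."""
    n = len(y)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]     # discrete analogue of a curvature penalty
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

t = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * t)
y = truth + 0.3 * np.random.default_rng(0).standard_normal(200)
f = smooth_penalised(y, lam=50.0)
print("error std before/after smoothing:",
      round(np.std(y - truth), 3), round(np.std(f - truth), 3))
```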
I will discuss my Honours work on Stabilisation of Finite Element Schemes for the Stokes Problem. In this work, we use a bi-orthogonal system in our stabilisation term. Supervisor: Bishnu Lamichhane
We investigate the regular action on a regular rooted tree induced by abelian groups satisfying property R_n. From this we construct all abelian groups satisfying property R_n when the number of children is prime. Supervisors: George Willis, Andrew Kepert
An important result of X.-J. Wang states that a convex ancient solution to mean curvature flow either sweeps out all of space or lies in a stationary slab (the region between two fixed parallel hyperplanes). We will describe recent results on the construction and classification of convex ancient solutions and convex translating solutions to mean curvature flow which lie in slab regions, highlighting the connection between the two. Work is joint with Theodora Bourni and Giuseppe Tinaglia.
The honeycomb toroidal graphs are a family of graphs I have been looking at now and then for thirty years. I shall discuss an ongoing project dealing with hamiltonicity as well as some of their properties which have recently interested the computer architecture community.
Finite generalised polygons are the rank 2 irreducible spherical buildings, and include projective planes and the generalised quadrangles, hexagons, and octagons. Since the early work of Ostrom and Wagner on the automorphism groups of finite projective planes, there has been great interest in what the automorphism groups of generalised polygons can be, and in particular, whether it is possible to classify generalised polygons with a prescribed symmetry condition. For example, the finite Moufang polygons are the 'classical' examples by a theorem of Fong and Seitz (1973-1974) (and the infinite examples were classified in the work of Tits and Weiss (2002)). In this talk, we give an overview of some recent results on the study of symmetric finite generalised polygons, and in particular, on the work of the speaker with Cai Heng Li and Eric Swartz.
In this talk I'll describe some recent discoveries about edge-transitive graphs and edge-transitive maps. These are objects that have received relatively little attention compared with their vertex-transitive and arc-transitive siblings.
First I will explain a new approach (taken in joint work with Gabriel Verret) to finding all edge-transitive graphs of small order, using single and double actions of transitive permutation groups. This has resulted in the determination of all edge-transitive graphs of order up to 47 (the best possible just now, because the transitive groups of degree 48 are not known), and bipartite edge-transitive graphs of order up to 63. It also led us to the answer to a 1967 question by Folkman about the valency-to-order ratio for regular graphs that are edge- but not vertex-transitive.
Then I'll describe some recent work on edge-transitive maps, helped along by workshops at Oaxaca and Banff in 2017. I'll explain how such maps fall into 14 natural classes (two of which are the classes of regular and chiral maps), and how graphs in each class may be constructed and analysed. This will include the answers to some 18-year-old questions by Širáň, Tucker and Watkins about the existence of particular kinds of such maps on orientable and non-orientable surfaces.
We consider an $L^2$-gradient flow of closed planar curves whose corresponding evolution equation is of sixth order. Given a smooth initial curve we show that the solution to the flow exists for all time and, provided the length of the evolving curve remains bounded, smoothly converges to a multiply-covered circle. Moreover, we show that curves in any homotopy class with initially small $L^3\|k_s\|^2$ enjoy a uniform length bound under the flow, yielding the convergence result in these cases. We also give some partial results for figure-8 type solutions to the flow. This is joint work with Ben Andrews, Glen Wheeler and Valentina-Mira Wheeler.
In many engineering problems, physical phenomena can occur at different length and time scales, and they are almost impossible to describe with a single mathematical model. More importantly, in such problems, small-scale physical phenomena can dramatically change the macroscopic properties of the system. Over the last few decades, particle-based methods have become a powerful tool that allows the physical phenomena of interest to be modelled at any length and time scale. In this talk, I’ll introduce some widely-used particle-based methods and share some of my experience in the development of particle-based mathematical models for engineering problems.
One of CARMA's goals is to foster an environment which provides guidance and support for what we might call "technical research issues". Broadly, this has meant that CARMA has used its resources to offer its members technical capabilities which were not readily available elsewhere, such as collaborative file-sharing, accessible "rich videoconferencing", web site hosting and web app development, high-performance computing, research software and visualisation tools like 3-D rendering and 3-D printing. Over the past 10 years, some of these resources have become available from other sources, including the University of Newcastle, and for those facilities, CARMA provides guidance about how to access and use them, as well as for other university systems.
This talk will cover the technical services which CARMA can help you with.
This is a talk for CARMA members, and a light lunch will be served at the start. Please RSVP for catering purposes to Juliane Turner (Juliane.Turner@newcastle.edu.au).
RHD students are particularly encouraged to attend; please pass this on to your students if they are not already engaged with CARMA.
This talk will be about the new course in mathematics at the University of Newcastle, MATH2005, Einstein, Bach and the Taj Mahal: Symmetry in the Arts, Sciences and Humanities. The course handbook description is:
Symmetry is an organising principle that plays a role, often unrecognised, in a vast range of disciplines, from mathematics and the physical sciences to music, design and the arts. This course aims to introduce students from a variety of disciplines to symmetry and its consequences. While symmetry is associated with beauty, balance and harmony, it is also associated with conservation, stasis and boredom, and on its own symmetry is not enough to explain the richness, diversity and dynamism of the universe. In contrast, the concept of symmetry breaking is associated with transitions and evolution, and linked to self-organisation, emergent behaviour and the appearance of information.
Beyond what is learnt about symmetry and symmetry breaking in this course, it is hoped that the concepts will challenge and change the thinking of students as they approach future subjects in their own disciplines.
We present an accurate database of diffusion properties of Ni-Zr melts generated within the framework of the molecular-dynamics method in conjunction with a semi-empirical many-body interatomic potential. The reliability of the model description of Ni-Zr melts is confirmed via comparison of our simulation results with the existing experimental data on the diffusion properties of Ni-Zr melts. A statistical mechanical formalism is employed to understand the behaviour of the cross-correlation between the interdiffusion flux and the force caused by the difference in the average random accelerations of atoms of different species in the short time limit. Through further investigation of the diffusion dynamics, the most viable composition range for glass formers is identified.
We will discuss which patterns necessarily occur in sets of positive density in homogeneous trees and certain affine buildings. This is based on joint work with Michael Bjorklund (Chalmers) and James Parkinson (University of Sydney).
For some time now, I have been trying to understand the intricacy and complexity of integer sequences from a variety of different viewpoints and at least at some level trying to reconcile these viewpoints. However vague that sounds—and it certainly is vague to me—in this talk I hope to explain this sentiment a bit. While a variety of results will be considered, I will focus closely on two examples of wider interest, the Thue-Morse sequence and the set of $k$-free integers.
I offer a leisurely introduction to the 'what', 'why', 'what exactly' and 'how' of my research, which revolves around groups acting on trees. Following a motivation of the subject by situating said groups within the broader theory of all groups, I explain the meaning of 'local' and 'global' in this context. With some examples of groups acting on trees at hand, I illustrate how, in general, 'local' actions have 'global' implications. (Credit to Alejandra Garrido for the title!)
The Great Barrier Reef (GBR) is under threat. After climate change, water quality is recognised as the greatest stressor on the GBR. Sediments eroded from the catchment are transported to the marine environment, leading to poor water quality in the GBR lagoon. Suspended sediment reduces light availability and impedes seagrass growth. Sedimentation can bury coral polyps, cause tissue necrosis, and reduce the recruitment and survival of coral larvae leading to coral reef decline. Sediment can also transport nutrients into the lagoon, potentially leading to eutrophication, algal blooms, and Crown of Thorns Starfish outbreaks. Gullies, particularly in grazing areas, have been identified as leading contributors to sediment reaching the GBR lagoon, despite occupying a small proportion of the landscape. Reducing gully erosion is critical to improving water quality of the GBR, however the current pace of change is insufficient to achieve water quality targets. To guide investment, improved mathematical models of gully erosion are sought that can better assess the efficiency of remediation actions.
Historically, local gully erosion processes have been poorly represented by models. Empirical and conceptual models have been developed to provide guidance on the expected annual average sediment supply from gullied areas, however these are poorly suited to informing interventions or guiding investment. In this talk we develop a process-based model to describe the erosion of sediment from an ideal alluvial gully or linear form. We explore this model through the lens of supporting investment decisions to remediate gullied landscapes and demonstrate how the model can be applied in this context.
It will be seen in this talk that certain geometrical theorems may be proved rigorously by checking only three cases. The idea is what Doron Zeilberger calls an 'Ansatz' -- that once we know the general form of a solution we can find the exact solution by checking a few cases. He gives examples where formulae usually established by induction can in fact be proved by checking a small number of cases. We shall do the same for Napoleon's Theorem and also for geometric theorems which don't seem to have been known either to the ancients or to Napoleon Bonaparte.
High-school mathematics only will be assumed. Themes such as computer algebra and notions of proof will be touched on, as will the historical context of ideas such as calculus and complex numbers seen in first-year university mathematics courses.
A talk given at ACSME. Joint work with Joel Black, Naomi Borwein, Florian Breuer, Peter Ellerton, Jo-Ann Larkins and Malcolm Roberts.
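Purely for illustration alongside this abstract (a quick numerical sanity check, not the three-case argument the talk describes), Napoleon's theorem is easy to verify with complex arithmetic; the random test triangles below are an arbitrary choice.

```python
import random

def napoleon_centres(a, b, c):
    """Centroids of the equilateral triangles erected on the sides of triangle abc:
    for each directed side, the centre lies on the perpendicular through the midpoint
    at distance |side| / (2*sqrt(3))."""
    def centre(p, q):
        return (p + q) / 2 + (q - p) * complex(0, -1) / (2 * 3 ** 0.5)
    return centre(a, b), centre(b, c), centre(c, a)

random.seed(0)
for _ in range(5):
    a, b, c = (complex(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3))
    n1, n2, n3 = napoleon_centres(a, b, c)
    sides = [abs(n1 - n2), abs(n2 - n3), abs(n3 - n1)]
    assert max(sides) - min(sides) < 1e-9   # the Napoleon triangle is equilateral
print("Napoleon's triangle came out equilateral for 5 random triangles")
```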
Bryan will be talking about his work with Western EcoSystems Technology Inc. in the USA where his work involves projects assessing endangered fish in rivers and fisheries bycatch of marine mammals and birds using change point methods. Sampling designs and analysis for the Alaska Marine Mammal Observer Program will be discussed as will various analyses for the San Luis and Delta-Mendota Water Association and the Metropolitan Water Association of Southern California. Research for the U.S. Army Corps of Engineers involves predicting fish survival rates in dams in the Columbia River Basin using the virtual/paired release method.
This talk will focus on the basic concepts of first-principles molecular dynamics (FPMD) and on some related applications developed within our team at IPCMS in Strasbourg. We are interested in achieving quantitative predictions for materials at the atomic scale by relying on models based on an appropriate account of chemical bonding. This scheme allows for the production of temporal trajectories ensuring the connection between statistical mechanics and macroscopic properties. FPMD lies at the crossroad between molecular dynamics and density functional theory, this latter playing the role of potential energy depending on both the atomic and electronic structure of the system. Examples will be provided for several areas within computational materials science, with special emphasis on disordered materials.
Classic modelling of biological systems assumes the length scale of interaction is far less than the modelling length scale. However, biological interactions can occur over long ranges via mechanisms such as sight and smell. It is possible to capture these interactions using classic conservation laws with a non-local velocity term. In this talk I will present various applications of non-local modelling from the modelling of phagocytosis at a single cell level up to the swarming behaviour of locusts. I will also look at various analysis and simulation techniques needed to approach these problems. Finally, I will present future goals and direction for my work.
Systems with small parameters are often studied using asymptotic techniques. Despite the ubiquity of these techniques, many classical asymptotic methods are unable to capture behaviour that occurs on an exponentially small scale, which lies "beyond all orders" of power series in the small parameter. Typically this does not cause any issues; this behaviour is too small to have a measurable impact on the overall behaviour of the system. I will showcase two systems in which exponentially small contributions have a significant effect on the overall system behaviour.
The first system, which I will discuss in detail, will be nonlinear waves propagating through particle chains with periodic masses. I will show that, for certain combinations of parameters determined by the exponentially small system behaviour, Toda and FPUT lattices can typically produce solitary waves that propagate indefinitely. The second system, which I will discuss more briefly, will be the shape of bubbles in a steadily translating Hele-Shaw cell. By studying exponentially small effects, it is possible to construct exotic bubble shapes which correspond to recent laboratory experiments.
Self-similarity (when part of an object is a scaled version of the whole) is one of the most basic forms of symmetry. While known and used since ancient times, its use and investigation took off in the 1980s thanks to the advent of fractals, whose infinite self-similar structure has captured the imagination of mathematicians and lay people alike.
Self-similar fractals are highly symmetrical, so much so that even their symmetry groups exhibit self-similarity. In this talk, I will introduce and discuss groups which are self-similar, or fractal, in an algebraic sense; their connections to fractals, symbolic dynamics and automata theory; how they produce fascinating new examples in group theory, and some research questions in this lively new area.
A non-singular measurable dynamical system is a measure space $X$ whose measure $\mu$ has the property that $\mu $ and $\mu \circ T$ are equivalent measures (in the sense that they have the same sets of measure zero).
Here $T$ is a bimeasurable invertible transformation of $X$. The basic building blocks are the \emph{ergodic} measures.
Von Neumann proposed a classification of non-singular ergodic dynamical systems, and this has been elaborated subsequently by Krieger, Connes and others. This work has deep connections with C*-algebras.
I will describe some work of myself, collaborators and students which explores the classification of dynamical systems from the point of view of measure theory. In particular, we have recently been exploring the notion of critical dimension, a study of the rate of growth of sums of Radon-Nikodym derivatives $\sum_{k=1}^{n} \frac{d\mu \circ T^k}{d\mu}$. Recently, we have been replacing the single transformation $T$ with a group acting on the space $X$.
Let $X$ be the Cantor set and let $g$ be a minimal homeomorphism of $X$ (that is, every orbit is dense). Then the topological full group $\tau[g]$ of $g$ consists of all homeomorphisms $h$ of $X$ that act 'piecewise' as powers of $g$, in other words, $X$ can be partitioned into finitely many clopen pieces $X_1,...,X_n$ such that for each $i$, $h$ acts on $X_i$ as a constant power of $g$. Such groups have attracted considerable interest in dynamical systems and group theory, for instance they characterize the homeomorphism up to flip conjugacy (Giordano--Putnam--Skau) and they provided the first known examples of infinite finitely generated simple amenable groups (Juschenko--Monod). My talk is motivated by the following question: given $h\in\tau[g]$ for some minimal homeomorphism $g$, what can the closures of orbits of $h$ look like?
Certainly $h\in\tau[g]$ is not minimal in general, but it turns out to be quite close to being minimal, in the following sense: there is a decomposition of $X$ into finitely many clopen invariant pieces, such that on each piece $h$ acts as a homeomorphism that is either minimal or of finite order. Moreover, on each of the minimal parts of $h$, either $h$ or $h^{-1}$ has a 'positive drift' with respect to the orbits of $g$; in fact, it can be written in a canonical way as a conjugate of a product of induced transformations (aka first return maps) of $g$.
No background knowledge of topological full groups is required; I will introduce all the necessary concepts in the talk.
I will outline a new theory of fractal tilings. The approach uses graph iterated function systems (IFS) and centers on underlying symbolic shift spaces. These provide a zero dimensional representation of the intricate relationship between shift dynamics on fractals and renormalization dynamics on spaces of tilings. The ideas I will describe unify, simplify, and substantially extend key concepts in foundational papers by Solomyak, Anderson and Putnam, and others. In effect, IFS theory on the one hand, and self-similar tiling theory on the other, are unified.
The work presented is largely new and has not yet been submitted for publication. It is joint work with Andrew Vince (UFL) and Louisa Barnsley. The presentation will include links to detailed notes. The figures illustrate 2d fractal tilings.
By way of recommended background reading I mention the following award-winning paper: M. F. Barnsley, A. Vince, Self-similar polygonal tilings, Amer. Math. Monthly 124 (2017), 905-921.
Automorphism groups of complexes are a productive area of study, not only for understanding the structure of complexes but also for providing examples of groups of various kinds. Because we are dealing with infinite groups of automorphisms, one important area of research is deriving properties of the automorphism group of a complex from properties of the originating complex.
In this talk, we will discuss three problems related to the theory of convex cones, namely i) the isomorphism problem, ii) the homogeneity problem and iii) the self-duality problem. After explaining why one should care about such questions, I will present a few results on those problems. In particular, I will cover recent results on the p-cones and their automorphism groups. This is joint work with Masaru Ito (Nihon University).
Generalised polygons were first introduced by Jacques Tits in 1959, in the context of studying geometric realisations of the finite simple groups of Lie type. Thus, the study of their symmetry groups and symmetry properties is a rich area of research. My work has focused on studying the point-primitive quadrangles. In my talk I will describe a computer program for testing whether a particular group can act point-primitively on a generalised quadrangle, and its application to analysing the almost simple sporadic groups. My work on this program motivated the discovery of a new result dubbed the Line Orbit Lemma, which in turn inspired the Hemisystem Conjecture, both of which could prove very useful in the analysis of point-primitive quadrangles.
Optimization is often viewed as an active yet mature research field. However, the recent and rapid development of the emerging field of Imaging Sciences has provided a very rich source of new problems as well as big challenges for optimization. Such problems, typically having non-smooth and non-convex functionals, demand urgent and major improvements to traditional solution methods designed for convex and differentiable functionals.
This talk presents a limited review of a set of Imaging Models which are investigated by the Liverpool group as well as other groups, out of the huge literature of related works. We start with image restoration models regularised by the total variation and higher-order regularizers. We then show some results from image registration, where one aligns a pair of images which may be single-modality or multi-modality, the latter being very much non-trivial. Next we review variational models for image segmentation. Finally we show some recent attempts to extend our image registration models from more traditional optimization to the Deep Learning framework.
Joint work with recent and current collaborators including D P Zhang, A Theljani, M Roberts, J P Zhang, A Jumaat, T Thompson.
Many interesting objects in the study of the dynamics of complex algebraic varieties are known or conjectured to be transcendental, such as the uniformizing map describing the (complement of a) Julia set, or the Feigenbaum constant. We will discuss various connections between transcendence theory and complex dynamics, focusing on recent developments using transcendence theory to describe the intersection of orbits in algebraic varieties, and the realization of transcendental numbers as measures of dynamical complexity for certain families of maps.
The production of intricate structures at the nanoscale has gone from fantasy to reality in just a few decades. This rapid pace of development in experimental techniques has left a large gap for mathematicians to fill, whether by optimising existing methods or by developing predictive tools to lower the cost (in both time and money) of experimentation and fabrication. In this seminar I will begin with background and a review, discussing: carbon materials and polycyclic aromatic hydrocarbons and their uses; the Lennard-Jones potential, its use as an interatomic potential function to model van der Waals forces, and why this potential is relevant to carbon materials; the continuum approximation of the Lennard-Jones potential and its usefulness when modelling intermolecular potentials; and lastly an overview of molecular dynamics simulations.
I will then present my preliminary results, which begin with a motivating problem of modelling a stack of coronene molecules encapsulated within a single-walled carbon nanotube. I will then discuss how we model the system by picking appropriate surfaces and then solving integrals derived from the smaller coronene-coronene and coronene-nanotube interactions, which we then use to build up an analytic expression for the entire system.
Next I will consider the research I will undertake in the near future, which includes investigating nonconstant attractive and repulsive coefficients within the Lennard-Jones potential, analysing the use of carbon nanomaterials and other nanostructures for gas storage, and briefly touching on modelling carbon capture.
Lastly I will go over my research plan including manuscripts I have submitted and aim to submit, conferences and events I have attended and will attend in the coming year, and finally a rough timeframe for the completion of my thesis.
2020 is a Special Year in Mathematics Communication, hosted by the Mathematical Education Research Group in CARMA.
Upcoming events include:
as well as regular seminars during the teaching semesters. Events and seminars will address the increasing importance of mathematics communication for and amongst a wide range of contexts and audiences, including across disciplines and industries, with the general public, and in education from kindergarten to PhD.
Further information and details of events will appear on the MathsComm web page.
Optimisation is a branch of applied mathematics that focuses on using mathematical techniques to optimise complex systems. Real-world optimisation problems are typically enormous in scale, with hundreds of thousands of inter-related variables and constraints, multiple conflicting objectives, and a number of candidate solutions that can easily exceed the total number of atoms in the solar system, overwhelming even the fastest supercomputers. Mathematical optimisation has numerous applications in business and industry, but there is a big mismatch between the optimisation problems studied in academia (which tend to be highly structured problems) and those encountered in practice (which are non-standard, highly unstructured problems). This lecture gives a non-technical overview of the presenter's recent experiences in building optimisation models and practical algorithms in the oil and gas, mining, and agriculture sectors. Some of this practical work has led to academic journal articles, showing that the gap between industry and academia can be overcome.
In number theory, special values coming from arithmetic generating functions always provide information about certain geometric invariants of the corresponding objects. For instance, the logarithmic derivative of the Riemann zeta function at s=0 is equal to the natural log of the length of the unit circle. Moreover, the celebrated Kronecker limit formula expresses the logarithmic derivative of the non-holomorphic Eisenstein series at s=0 in terms of the “periods” of “unit” elliptic curves.
In this talk, I will discuss the classical theory of “Kronecker terms”, and mention a similar phenomenon in the “mixed characteristic” setting if time allows.
Profinite groups are the inverse limits of finite groups, or equivalently, the compact totally disconnected groups. First-order logic in the signature of groups can directly talk only about their algebraic structure. We address the question whether a profinite group $G$ can be determined by a single first-order sentence: is there a sentence $\phi$ such that $H \models \phi$ if and only if $H$ is topologically isomorphic to $G$, for each profinite group $H$?
Let $p\ge 3$ be a prime. We show that this property holds for the groups $SL_2(\mathbb Z_p)$ and $PSL_2(\mathbb Z_p)$ where $\mathbb Z_p$ is the ring of $p$-adic integers. If we restrict the reference class to the inverse limits of $p$-groups, we obtain many further examples, e.g.\ all groups with a bound on the dimension of the closed subgroups (such as the abelian group $\mathbb Z_p$).
This is joint work with Dan Segal and Katrin Tent.
A major change in the educational policy landscape in many countries has been the introduction of computing into the school curriculum, either as part of Mathematics or as a separate subject. This has often happened alongside the establishment of ‘Coding’ in out-of-school clubs. In this talk, we will reflect on the situation in England, where computing has been a compulsory subject since 2014 for all students from age 7 to 16 years. We will describe the research project, UCL ScratchMaths, designed to introduce students aged 9-11 years to both core computational and mathematical ideas. We will discuss the findings of the project, the challenges faced in its implementation and the exciting next steps in the computing/mathematics initiative from a more international perspective.
Please visit the lecture's Eventbrite page for more information and to register for this free event.
We prove a local-to-global result for fixed points of groups acting on $2$-dimensional affine buildings (possibly non-discrete, and not of type $\tilde{G}_{2}$). In the discrete case, our theorem establishes two conjectures by Marquis. (joint work with Koen Struyve and Anne Thomas)
A classical result of Siegel asserts that the (2,3,7)-triangle group attains the smallest covolume among lattices of $\mathrm{SL}_2(\mathbb{R})$. In general, given a semisimple Lie group $G$ over some local field $F$, one may ask which lattices in $G$ attain the smallest covolume. A complete answer to this question seems out of reach at the moment; nevertheless, many steps have been made in the last decades. Inspired by Siegel's result, Lubotzky determined that a lattice of minimal covolume in $\mathrm{SL}_2(F)$ with $F=\mathbb{F}_q((t))$ is given by the so-called characteristic $p$ modular group $\mathrm{SL}_2(\mathbb{F}_q[1/t])$. He noted that, in contrast with Siegel’s lattice, the quotient by $\mathrm{SL}_2(\mathbb{F}_q[1/t])$ was not compact, and asked what the typical situation should be: "for a semisimple Lie group over a local field, is a lattice of minimal covolume a cocompact or nonuniform lattice?".
In the talk, we will review some of the known results, and then discuss the case of $\mathrm{SL}_n(\mathbb{R})$ for $n > 2$. It turns out that, up to automorphism, the unique lattice of minimal covolume in $\mathrm{SL}_n(\mathbb{R})$ ($n > 2$) is $\mathrm{SL}_n(\mathbb{Z})$. In particular, it is not uniform, giving a partial answer to Lubotzky’s question in this case.
We define the almost automorphism group of a regular tree, also known as Neretin's group, and determine when two elements are conjugate. (joint work with Gil Goffer)
Whyte introduced translation-like actions of groups as a geometric generalization of subgroup containment. He then proved a geometric reformulation of the von Neumann conjecture by demonstrating that a finitely generated group is non-amenable if and only if it admits a translation-like action by a non-abelian free group. This provides motivation for the study of which groups can act translation-like on other groups. As a consequence of Gromov’s polynomial growth theorem, virtually nilpotent groups can act translation-like on other nilpotent groups. We demonstrate that if two nilpotent groups have the same growth, but non-isomorphic Carnot completions, then they can't act translation-like on each other. (joint work with David Cohen)
A special public event for Pi Day! Join us at NewSpace for MathsCraft activities to suit all ages from 8 years up, from origami to hyperbolic crocheting!
There will be two public talks:
Please drop by and celebrate pi.
In their celebrated 1993 paper, Brink and Howlett proved that all finitely generated Coxeter groups are automatic. In particular, they constructed a finite state automaton recognising the language of reduced words in a Coxeter group. This automaton is not minimal in general, and recently Christophe Hohlweg, Philippe Nadeau and Nathan Williams stated a conjectural criterion for minimality. In this talk we will explain these concepts, and outline the proof of the conjecture of Hohlweg, Nadeau, and Williams. We will also describe an alternative algorithm to minimise any finite state automaton recognising the language of reduced words in a Coxeter group, which utilises the associated root system of the group.
This work is joint with James Parkinson.
Free groups, and free products of finite groups, are the easiest non-abelian infinite groups to think about. Yet the automorphism groups of such groups still present significant mysteries. We discuss a program of research concerning automorphisms of easily understood infinite groups.
Bridson, Burillo, Elder and Šunić asked if there exists a group with intermediate geodesic growth and if there is a characterisation of groups with polynomial geodesic growth. Towards these questions, they showed that there is no nilpotent group with intermediate geodesic growth, and they provided a sufficient condition for a virtually abelian group to have polynomial geodesic growth. In this talk, we take the next step in this study and show that the geodesic growth of a finitely generated virtually abelian group is either polynomial or exponential, and that the generating function of this geodesic growth series is holonomic, and rational in the polynomial growth case. To obtain this result, we make use of the combinatorial properties of the class of linearly constrained languages as studied by Massazza. In addition, we show that the language of geodesics of a virtually abelian group is a blind multicounter language.
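As a concrete illustration (not taken from the talk) of how geodesic growth differs from ordinary growth, take $\mathbb{Z}^2$ with the standard generating set $\{x^{\pm 1}, y^{\pm 1}\}$. A word in these generators is geodesic precisely when it contains neither both $x$ and $x^{-1}$ nor both $y$ and $y^{-1}$; each of the four resulting two-letter alphabets contributes $2^{n}$ words of length $n$, and the four words $x^{\pm n}, y^{\pm n}$ are counted twice, so
\[
\#\{\text{geodesic words of length } n\} \;=\; 4\cdot 2^{n} - 4 \qquad (n\ge 1).
\]
Hence the geodesic growth is exponential (with a rational generating function), even though the ordinary growth of $\mathbb{Z}^2$ is quadratic.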
Many well-known families of groups and semigroups have natural categorical analogues: e.g., full transformation categories, symmetric inverse categories, as well as categories of partitions, Brauer/Temperley-Lieb diagrams, braids and vines. This talk discusses presentations (by generators and relations) for such categories, utilising additional tensor/monoidal operations. The methods are quite general, and apply to a wide class of (strict) tensor categories with one-sided units.
Dye-Sensitized Solar Cells (DSSCs) have remained a viable source of renewable energy since their introduction in 1991, owing to their novel choice of materials. In particular, substituting a nanoporous titanium dioxide semiconductor for high-purity silicon greatly lowers production costs. Mathematical modelling for DSSCs must account for the electrochemical nature of DSSCs, rather than relying on the traditional models inherited from Shockley's work in the 1940s. Though the literature has developed a diffusion model for this purpose, there is sparse mathematical treatment in this area. The objective of this thesis is to provide mathematical insight with the goals of increasing our understanding of DSSCs and maximising their efficiency. In addition to providing new analytical solutions for linear diffusion models, we also apply Lie symmetry analysis to the nonlinear diffusion model and develop a new fractional diffusion equation based on subdiffusion equations derived from random-walk simulations.
The construction of compactly supported smooth orthonormal wavelets has been reformulated as feasibility problems. This feasibility approach to wavelet construction has been successful in reproducing Daubechies' wavelets and in building non-separable examples of wavelets on the plane. We discuss extensions of these constructions that allow for the optimization of wavelets' cardinality and symmetry. We also present relevant optimization techniques that we have developed to solve wavelet feasibility problems. Finally, we tackle the ongoing application of the feasibility approach to construct compactly supported quaternionic orthonormal wavelets with prescribed regularity.
Hierarchically hyperbolic groups (HHGs) and spaces are recently-introduced generalisations of (Gromov-) hyperbolic groups and spaces. Other examples of HHGs include mapping class groups, right-angled Artin/Coxeter groups, and many groups acting properly and cocompactly on CAT(0) cube complexes. After a substantial introduction and motivation, I will present a combination theorem for hierarchically hyperbolic groups. As a corollary, any graph product of finitely many HHGs is itself an HHG. Joint work with B. Robbio.
The notion of a "hierarchically hyperbolic space/group" grows out of geometric similarities between CAT(0) cubical groups and mapping class groups. Hierarchical hyperbolicity is a "coarse nonpositive curvature" property that is more restrictive than acylindrical hyperbolicity but general enough to include many of the usual suspects in geometric group theory. The class of hierarchically hyperbolic groups is also closed under various procedures for constructing new groups from old, and the theory can be used, for example, to bound the asymptotic dimension and to study quasi-isometric rigidity for various groups. One disadvantage of the theory is that the definition - which is coarse-geometric and just an abstraction of properties of mapping class groups and cube complexes - is complicated. We therefore present a comparatively simple sufficient condition for a group to be hierarchically hyperbolic, in terms of an action on a hyperbolic simplicial complex. I will discuss some applications of this criterion to mapping class groups and (non-right-angled) Artin groups. This is joint work with Jason Behrstock, Alexandre Martin, and Alessandro Sisto.
In this talk I will introduce a new concept in graph theory known as generalized graph truncations. Although graph truncations have appeared throughout history, few papers have studied them, and only from quite focused perspectives. Here I will give a general outline of how generalized truncations can be constructed, as well as a characterisation of them. I will also outline some results I have obtained concerning Eulerian truncations, planarity, edge-connectivity, and edge-colourings.
We explain the proof that Neretin groups have no nontrivial ergodic invariant random subgroups (IRS). Equivalently, any non-trivial ergodic p.m.p. action of Neretin’s group is essentially free. This property can be thought of as simplicity in the sense of measurable dynamics, whereas Neretin groups were already known to be abstractly simple by a result of Kapoudjian. The heart of the proof is a “double commutator” lemma for IRSs of elliptic subgroups.
I will begin with the definition of topological full groups and explain various examples of them. The topological full group arising from a minimal homeomorphism on a Cantor set gave the first example of finitely generated simple groups that are amenable and infinite. The topological full groups of one-sided shifts of finite type are viewed as generalizations of the Higman-Thompson groups. Based on these two fundamental examples, I will discuss recent developments in the study of topological full groups.
We show that for almost all primitive integral cohomology classes in the fibered cone of a closed fibered hyperbolic 3-manifold, the monodromy normally generates the mapping class group of the fiber. The key idea of the proof is to use Fried’s theory of suspension flow and the dynamic blow-up of Mosher. If time permits, we also discuss the non-existence of the analog of Fried’s continuous extension of the normalized entropy over the fibered face in the case of asymptotic translation lengths on the curve complex. This talk is based on joint work with Eiko Kin, Hyunshik Shin and Chenxi Wu.
A sequence of expanders is a family of finite graphs that are sparse yet highly connected. Such families of graphs are fundamental objects that have found a wealth of applications throughout mathematics and computer science. This talk is centred around an "asymptotic" weakening of the notion of expansion. The original motivation for this asymptotic notion comes from the study of operator algebras associated with metric spaces. Further motivation comes from some recent works which established a connection between asymptotic expansion and strongly ergodic actions. I will give a non-technical introduction to this topic, highlighting the relations with usual expanders and group actions.
In 1967 Richard Thompson introduced the group $F$, hoping that it was non-amenable, since then it would disprove the von Neumann conjecture. Though the conjecture has subsequently been disproved, the question of the amenability of Thompson's group F has still not been rigorously settled. In this talk I will present the most comprehensive numerical attack on this problem that has yet been mounted. I will first give a history of the problem, including mention of the many incorrect "proofs" of amenability or non-amenability. Then I will give details of a new, efficient algorithm for obtaining terms of the co-growth sequence. Finally I will describe a number of numerical methods to analyse the co-growth sequences of a number of infinite, finitely-generated groups, and show how these methods provide compelling evidence (though of course not a proof) that Thompson's group F is not amenable. I will also describe an alternative route to a rigorous proof. (This is joint work with Andrew Elvey Price).
The theory of EG, the class of elementary amenable groups, has developed steadily since the class was introduced constructively by Day in 1957. At that time, it was unclear whether or not EG was equal to the class AG of all amenable groups. Highlights of this development certainly include Chou's article in 1980 which develops much of the basic structure theory of the class EG, and Grigorchuk's 1985 result showing that the first Grigorchuk group $\Gamma$ is amenable but not elementary amenable. In this talk we report on work where we demonstrate the existence of a family of finitely generated subgroups of Richard Thompson’s group $F$ which is strictly well-ordered by the embeddability relation in type $\varepsilon_{0}+1$. All except the maximum element of this family (which is $F$ itself) are elementary amenable groups. In this way, for each $\alpha<\varepsilon_{0}$, we obtain a finitely generated elementary amenable subgroup of F whose EA-class is $\alpha+2$. The talk will be pitched for an algebraically inclined audience, but little background knowledge will be assumed. Joint work with Matthew Brin and Justin Moore.
A non-compact, compactly generated, locally compact group whose proper quotients are all compact is called just-non-compact. Discrete just-non-compact groups are John Wilson’s famous just infinite groups. In this talk, I’ll describe an ongoing project to use permutation groups to better understand the class of just-non-compact groups that are totally disconnected. An important step for this project has recently been completed: there is now a structure theorem for non-compact tdlc groups G that have a compact open subgroup that is maximal. Using this structure theorem, together with Cheryl Praeger and Csaba Schneider’s recent work on homogeneous cartesian decompositions, one can deduce a neat test for whether the monolith of such a group G is a one-ended group in the class S of nondiscrete, topologically simple, compactly generated, tdlc groups. This class S plays a fundamental role in the structure theory of compactly generated tdlc groups, and few types of groups in S are known.
We show that many 2-dimensional Artin groups are residually finite. This includes Artin groups on three generators with labels at least 3, where either at least one label is even, or at most one label is equal to 3. The result relies on a decomposition of these Artin groups as graphs of finite rank free groups.
I will describe a new proof, joint with Adam Piggott (UQ), that groups presented by finite convergent length-reducing rewriting systems where each rule has left-hand side of length 3 are exactly the plain groups (free products of finite and infinite cyclic groups). Our proof relies on a new result about properties of embedded circuits in geodetic graphs, which may be of independent interest in graph theory.
A finite graph that can be obtained from a given graph by contracting edges and removing vertices and edges is said to be a minor of this graph. Minors have played an important role in graph theory ever since the well-known result of Kuratowski that characterised planar graphs as those that admit neither the complete graph on 5 vertices nor the complete bipartite graph on (3,3) vertices as a minor. In this talk, we will explore how this concept interacts with some notions from geometric group theory, and describe a new characterisation of virtually free groups in terms of minors of their Cayley graphs.
A graph is vertex-transitive if its group of automorphisms acts transitively on its vertices. A very important concept in the study of these graphs is that of local action, that is, the permutation group induced by a vertex-stabiliser on the corresponding neighbourhood. I will explain some of its importance and discuss some attempts to generalise it to the case of directed graphs.
The concept of a synchronising permutation group was introduced nearly 15 years ago as a possible way of approaching the \v{C}ern\'y Conjecture. Such groups must be primitive. In an attempt to understand synchronising groups, a whole hierarchy of properties for a permutation group has been developed, namely: 2-transitive groups, $\mathbb{Q}$I-groups, spreading, separating, synchronising, almost synchronising and primitive. Many surprising connections with other areas of mathematics, such as finite geometry, graph theory, and design theory, have arisen in the study of these properties. In this survey talk I will give an overview of the hierarchy and discuss what is known about which groups lie where.
We give a survey of recent results exploring connections between the Higman-Thompson groups and their automorphism groups and the group of automorphisms of the shift dynamical system. Our survey takes us from dynamical systems to group theory via groups of homeomorphisms, with a segue through combinatorics, in particular de Bruijn graphs.
Answer: Only when it's an ample group in the sense of Krieger (in particular, discrete, countable and locally finite) and has a Bratteli diagram satisfying certain conditions.
Complaint: Wait, isn't Neretin's group a non-discrete, locally compact, topological full group?
Retort: It is, but you need to use the correct topology!
A fleshed-out version of the above conversation will be given in the talk. Based on joint work with Colin Reid.
There has been a long interest in embedding and non-embedding results for groups in the Thompson family. One way to get at results of this form is to classify maximal subgroups. In this talk, we will define certain labelings of binary trees and use them to produce a large family of new maximal subgroups of Thompson's group V. We also relate them to a conjecture about Thompson's group T.
This is joint, ongoing work with Jim Belk, Collin Bleak, and Martyn Quick at the University of Saint Andrews.
We consider irrational slope versions of T and V. We give infinite presentations for these groups and show how they can be represented by tree-pair diagrams. We also show that they have index-2 normal subgroups that are simple.
This is joint work with Brita Nucinkis and Pep Burillo.
In the 1950's and 60's, the field of general relativity was revolutionised by the introduction of advanced mathematical analysis, in particular through the work of Choquet-Bruhat and Penrose. This revolution put (relativistic) astrophysics and cosmology on a firm mathematical foundation and culminated in definitive theoretical evidence for the formation of "singularities" in stellar collapse and the beginning of (the current phase of?) the universe. I will present an introduction to these advances and some of the mathematics behind them. The talk is aimed at "the lay mathematician".
Topological full groups of minimal subshifts are an important source of exotic examples in geometric group theory, as well as being powerful invariants of symbolic dynamical systems. In 2011, Grigorchuk and Medynets proved that TFGs are LEF, that is, every finite subset of the multiplication table occurs in the multiplication table of some finite group. In this talk we explore some ways in which asymptotic properties of the finite groups which occur reflect asymptotic properties of the associated subshift. Joint work with Daniele Dona.
It is well-known that the Galois group of an (infinite) algebraic field extension is a profinite group. When the extension is transcendental, the automorphism group is no longer compact, but has a totally disconnected locally compact structure (TDLC for short). The study of TDLC groups was initiated by van Dantzig in 1936 and then restarted by Willis in 1994. In this talk some of Willis' concepts, such as tidy subgroups, the scale function, flat subgroups and directions are introduced and applied to examples of automorphism groups of transcendental field extensions. It remains unknown whether there exist conditions that a TDLC group must satisfy to be a Galois group. A suggestion of such a condition is made.
I will show how to construct field extensions with Galois groups isomorphic to general linear groups (with entries in various rings and fields) from the torsion of elliptic curves and Drinfeld modules. No prior knowledge of these structures is assumed.
In this talk, we will be interested in measure-preserving actions of countable groups on standard probability spaces, and more precisely in the partitions of the space into orbits that they induce, also called measure-preserving equivalence relations. In 2000, Gaboriau obtained a characterization of the ergodic equivalence relations which come from non-free actions of the free group on $n>1$ generators: these are exactly the equivalence relations of cost less than n. A natural question is: how non-free can these actions be made, and what does the action on each orbit look like? We will obtain a satisfactory answer by showing that the action on each orbit can be made totipotent, which roughly means "as rich as possible", and furthermore that the free group can be made dense in the ambient full group of the equivalence relation. This is joint work with Alessandro Carderi and Damien Gaboriau.
My recent work has involved taking questions asked for finite groups and considering them for infinite groups. There are various natural directions with this. In finite group theory, there exist many beautiful results regarding generation properties. One such notion is that of spread, and Scott Harper and Casey Donoven have raised several intriguing questions about spread for infinite groups (in https://arxiv.org/abs/1907.05498). A group $G$ has spread $k$ if for every choice of nontrivial elements $g_1,\ldots,g_k$ we can find an $h$ in $G$ such that $\langle g_i, h\rangle=G$ for each $i$. For any group we can say that if it has a proper quotient that is non-cyclic, then it has spread $0$. In the finite world there is then the astounding result - which is the work of many authors - that this condition on proper quotients is not just a necessary condition for positive spread, but is also a sufficient one. Harper-Donoven’s first question is therefore: is this the case for infinite groups? Well, no. But that’s for the trivial reason that we have infinite simple groups that are not 2-generated (and they point out that 3-generated examples are also known). But if we restrict ourselves to 2-generated groups, what happens? In this talk we’ll see the answer to this question. The arguments will be concrete (*) and accessible to a general audience.
(*) at the risk of ruining the punchline, we will find a 2-generated group that has every proper quotient cyclic but that has spread zero.
Let $G$ be a group and $S$ a generating set. Then the group $G$ naturally acts on the Cayley graph $\mathrm{Cay}(G,S)$ by left multiplications. The group $G$ is said to be rigid if there exists an $S$ such that the only automorphisms of $\mathrm{Cay}(G,S)$ are the ones coming from the action of $G$. While the classification of finite rigid groups was achieved in 1981, few results were known about infinite groups. In a recent work, with M. de la Salle we gave a complete classification of infinite finitely generated rigid groups. As a consequence, we also obtain that every finitely generated group admits a Cayley graph with countable automorphism group.
Kaplansky made various related conjectures about group rings, especially for torsion-free groups. For example, the zero divisors conjecture predicts that if $K$ is a field and $G$ is a torsion-free group, then the group ring $K[G]$ has no zero divisors. I will survey what is known about the conjectures, including their relationships to each other and to other group properties such as orderability, and present some recent progress.
The inaugural CARMA Colloquium for 2021.
Join via Zoom, or join us in person (max room capacity is 9 people).
This talk is based on my upcoming book chapter of the same name, to appear in the Springer Handbook of the Mathematics of the Arts and Sciences. I will sample the myriad ways in which mathematical experiment advances my research. The examples are problems I solved with techniques inspired by the methodology and writings of the late Jonathan Borwein. I will emphasize the tools, strategies, and the broader arc that research follows: from low-dimensional, specific, and visible, to high-dimensional and general. Topics include dynamical systems, geometry, optimization, error bounds, random walks, special functions, and number theory. Because I am mainly interested in tools and creative thinking, rather than specific theory, this talk should be accessible to a wide audience.
It is a long standing question whether a group of type $F$ that does not contain Baumslag–Solitar subgroups is necessarily hyperbolic. One-relator groups are of type $F$ and Louder and Wilton showed that if the defining relator has imprimitivity rank greater than $2$, they do not contain Baumslag-Solitar subgroups, so they conjecture that such groups are hyperbolic. Cashen and I verified the conjecture computationally for relators of length at most $17$. In this talk I'll introduce hyperbolic groups and the imprimitivity rank of elements in a free group. I’ll also discuss how to verify hyperbolicity using versions of combinatorial curvature on van Kampen diagrams.
I will address two problems about recognising surface groups. The first one is the classical problem of classifying Poincaré duality groups in dimension two. I will present a new approach to this, joint with Peter Kropholler. The second problem is about recognising surface groups among one-relator groups. Here I will present a new partial result, joint with Giles Gardam and Alan Logan.
First, I will give a brief introduction to the Riemann zeta-function ζ(s) and its connection with prime numbers. In particular, I will mention the famous “explicit formula” that gives an explicit connection between Chebyshev’s prime-counting function ψ(x) and an infinite sum that involves the zeros of ζ(s). Using the explicit formula, many questions about prime numbers can be reduced to questions about these zeros or sums over the zeros.
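The explicit formula referred to here is the classical von Mangoldt formula: for $x > 1$ not a prime power,
\[
\psi(x) \;=\; x \;-\; \lim_{T\to\infty} \sum_{|\operatorname{Im}\rho|\le T} \frac{x^{\rho}}{\rho} \;-\; \log 2\pi \;-\; \tfrac{1}{2}\log\!\bigl(1 - x^{-2}\bigr),
\]
where $\rho$ runs over the nontrivial zeros of $\zeta(s)$.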
Motivated by such results, in the second half of the talk I will consider sums of the form ∑φ(γ), where φ is a function satisfying mild smoothness and monotonicity conditions, and γ ranges over the ordinates of nontrivial zeros ρ = β + iγ of ζ(s), with γ restricted to be in a given interval. I will show how the numerical estimation of such sums can be accelerated, and give some numerical examples.
The new results are joint work with Dave Platt (Bristol) and Tim Trudgian (UNSW). For preprints, see arXiv:2009.05251 and arXiv:2009.13791.
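As a crude illustration of the kind of sum in question (and of the naive, unaccelerated evaluation that such methods improve upon), one can sum directly over the first few zero ordinates with mpmath; here $\phi(\gamma) = \gamma^{-2}$ is an arbitrary illustrative choice, not one from the paper:

```python
# Naive direct evaluation of sum phi(gamma) = gamma**(-2) over the ordinates of
# the first 20 nontrivial zeros of zeta(s); purely a baseline, no acceleration.
from mpmath import mp, zetazero

mp.dps = 30
s = sum(zetazero(n).imag ** -2 for n in range(1, 21))
print(s)
```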
Join via Zoom, or join us in person (max room capacity is 9 people).
3:30pm for pre-talk drinks + snacks, and 4pm for the talk
You can watch a video version at https://youtu.be/0rE-EopdSyQ instead, or in addition!
A full paper describing this talk can be found at https://arxiv.org/abs/2008.01812. Mathieu functions of period π or 2π, also called elliptic cylinder functions, were introduced in 1868 by Émile Mathieu together with so-called modified Mathieu functions, in order to help understand the vibrations of an elastic membrane set in a fixed elliptical hoop. These functions still occur frequently in applications today: our interest, for instance, was stimulated by a problem of pulsatile blood flow in a blood vessel compressed into an elliptical cross-section. This talk surveys and recapitulates some of the historical development of the theory and methods of computation for Mathieu functions and modified Mathieu functions and identifies some gaps in current software capability, particularly to do with double eigenvalues of the Mathieu equation. We demonstrate how to compute Puiseux expansions of the Mathieu eigenvalues about such double eigenvalues, and give methods to compute the generalized eigenfunctions that arise there. In examining Mathieu's original contribution, we bring out that his use of anti-secularity predates that of Lindstedt. For interest, we also provide short biographies of some of the major mathematical researchers involved in the history of the Mathieu functions: Émile Mathieu, Sir Edmund Whittaker, Edward Ince, and Gertrude Blanch.
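For real parameter values, the characteristic-value curves of the Mathieu equation can be computed with standard software such as SciPy; the sketch below only covers real $q$, whereas the double eigenvalues and Puiseux expansions discussed in the talk occur at (complex) exceptional values of $q$ and require the specialised methods of the paper:

```python
# Characteristic values a_m(q) (even) and b_m(q) (odd) of the Mathieu equation
# for real q, via SciPy's ufuncs mathieu_a and mathieu_b.
import numpy as np
from scipy.special import mathieu_a, mathieu_b

q = np.linspace(0.0, 30.0, 7)
a2 = mathieu_a(2, q)   # characteristic value associated with ce_2
b2 = mathieu_b(2, q)   # characteristic value associated with se_2
print(np.column_stack((q, a2, b2)))
```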
The Post Correspondence Problem (PCP) is a classical problem in computer science that can be stated as: is it decidable whether, given two morphisms $g$ and $h$ between two free semigroups $A$ and $B$, there is any nontrivial $x$ in $A$ such that $g(x)=h(x)$? This question can be phrased in terms of equalisers, asked in the context of free groups, and expanded: if the `equaliser' of $g$ and $h$ is defined to be the subgroup consisting of all $x$ where $g(x)=h(x)$, it is natural to wonder not only whether the equaliser is trivial, but what its rank or basis might be. While the PCP for semigroups is famously insoluble and acts as a source of undecidability in many areas of computer science, the PCP for free groups is open, as are the related questions about rank, basis, or further generalisations. However, in this talk we will show that there are links and surprising equivalences between these problems in free groups, and classes of maps for which we can give complete answers. This is joint work with Alan Logan.
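For intuition, a single instance of the semigroup PCP can be explored by brute force, although this of course cannot decide the problem in general; the morphisms $g$ and $h$ below are hypothetical, chosen only so that a short solution exists:

```python
# Brute-force search for a nonempty word x over {a, b} with g(x) == h(x),
# for one toy pair of semigroup morphisms (hypothetical example).
from itertools import product

g = {"a": "ab", "b": "b"}
h = {"a": "a", "b": "bb"}

def apply(morphism, word):
    return "".join(morphism[c] for c in word)

def search(max_len=8):
    for n in range(1, max_len + 1):
        for letters in product("ab", repeat=n):
            w = "".join(letters)
            if apply(g, w) == apply(h, w):
                return w
    return None

print(search())  # "ab": g(ab) = "ab"+"b" = "abb" and h(ab) = "a"+"bb" = "abb"
```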
Join via Zoom, or join us in person (max room capacity is 9 people).
3:30pm for pre-talk drinks + snacks, and 4pm for the talk
Plane partitions are a two-dimensional analogue of integer partitions introduced by MacMahon in the 1890s. Various generating functions for plane partitions admit beautiful product forms, displaying an unexpected connection to the representation theory of classical groups and Lie algebras. Cylindric partitions, defined by Gessel and Krattenthaler in the 1990s, are an affine analogue of plane partitions.
In this talk I will explain what cylindric partitions are, discuss their connection with the representation theory of infinite dimensional Lie algebras, and describe some recent results on Rogers--Ramanujan-type identities arising from the study of cylindric partitions. No knowledge of representation theory will be assumed in this talk.
Fixed and moving boundary problems for the one-dimensional heat equation are considered. A unified approach to solving such problems is proposed by embedding a given initial boundary value problem into an appropriate initial value problem on the real line with arbitrary but given functions, whose solution is known. These arbitrary functions are determined by imposing that the solution of the initial value problem satisfies the given boundary conditions. Exact analytical solutions of some moving boundary problems that have not been previously obtained are provided. Moreover, examples of fixed boundary problems over semi-infinite and bounded intervals are given, thus providing an alternative approach to the usual methods of solution.
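The known whole-line solution being alluded to is presumably the standard heat-kernel representation: for $u_t = \alpha u_{xx}$ on $-\infty < x < \infty$ with $u(x,0) = F(x)$,
\[
u(x,t) \;=\; \frac{1}{\sqrt{4\pi\alpha t}} \int_{-\infty}^{\infty} \exp\!\left(-\frac{(x-y)^{2}}{4\alpha t}\right) F(y)\, \mathrm{d}y ,
\]
so that the otherwise-arbitrary extension of $F$ outside the physical domain can then be chosen to enforce the fixed or moving boundary conditions of the original problem.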
Join via Zoom, or join us in person (max room capacity is 9 people).
3:30pm for pre-talk drinks + snacks, and 4pm for the talk
In this presentation, I will give an overview of Ubiratan D'Ambrosio's concept of ethnomathematics and Elder Albert Marshall's concept of “two-eyed seeing.” I will address some of the dynamics between these two concepts and illustrate them with several examples, including a brief analysis of the geometry evident in a traditional Haida hat currently on display at the SFU Museum of Anthropology and the work of contemporary Salish artist Dylan Thomas.
A positive cone on a group $G$ is a subsemigroup $P$ such that $G$ is the disjoint union of $P$, $P^{-1}$ and the trivial element. Positive cones naturally codify left-invariant total orders on $G$. When $G$ is a finitely generated group, we will discuss whether or not a positive cone can be described by a regular language over the generators, and how the ambient geometry of $G$ influences the geometry of a positive cone. This is based on joint works with Juan Alonso, Joaquin Brum, Cristobal Rivas and Hang Lu Su.
Being of type $FP_2$ is an algebraic shadow of being finitely presented. A long-standing question was whether these two classes are equivalent. This was shown to be false in the work of Bestvina and Brady. More recently, there are many new examples of groups of type $FP_2$ coming with various interesting properties. I will begin with an introduction to the finiteness property $FP_2$. I will end by giving a construction to find groups that are of type $FP_2(F)$ for all fields $F$ but not $FP_2(\mathbb{Z})$.
Joint presentation with AustMS/AMSI lunchtime seminar series
Join via Zoom
As educators, we need to assess our students for a variety of reasons, from the mundane requirement to submit ranked scores to the arcane desire to encourage and track learning. I have developed my approach to oral examinations for undergraduate students in an attempt to support collaborative analysis of my students' understanding, as well as an opportunity for growth and discovery right up to the final moments of a course. I will present my experiences using oral assessments both alone and in combination with written work in multivariable calculus with mid-level students, in partial differential equations with upper-level students, and in introductory calculus with first-year students.
If $(K,f)$ is a difference field, and $a$ is a finite tuple in some difference field extending $K$ such that $f(a)\in K(a)^{\mathrm{alg}}$, then we define $\mathrm{dd}(a/K)=\lim_{k\to\infty}[K(f^k(a),a):K(a)]^{1/k}$, the distant degree of $a$ over $K$. This is an invariant of the difference field extension $K(a)^{\mathrm{alg}}/K$. We show that there is some $b$ in the difference field generated by $a$ over $K$, which is equi-algebraic with $a$ over $K$, such that $\mathrm{dd}(a/K)=[K(f(b),b):K(b)]$, i.e.\ for every $k>0$, $f(b)\in K(b,f^k(b))$. Viewing $\mathrm{Aut}(K(a)^{\mathrm{alg}}/K)$ as a locally compact group, this result is connected to results of George Willis on scales of automorphisms of locally compact totally disconnected groups. I will make explicit the correspondence between the two sets of results. (Joint with E. Hrushovski)
How difficult is it to solve a given computational problem? In a large class of computational problems, including the fixed-template Constraint Satisfaction Problems (CSPs), this fundamental question has a simple and beautiful answer: the more symmetrical the problem is, the easier it is to solve. The tight connection between the complexity of a CSP and a certain concept that captures its symmetry has fueled much of the progress in the area in the last 20 years. I will talk about this connection and some of the many tools that have been used to analyze the symmetries. The tools involve rather diverse areas of mathematics including algebra, analysis, combinatorics, logic, probability, and topology.
Join via Zoom, or join us in person (max room capacity is 9 people).
3:30pm for pre-talk drinks + snacks, and 4pm for the talk
This discussion will revolve around Michael Donovan’s experience as a Fulbright Visiting Scholar in 2019-2020 and the important role of being an ambassador for Australia and Fulbright’s international relationship building. This talk will feature an aspect of the Fulbright application process called the personal statement. The personal statement is not about your academic standing or research tasks but a process to allow the Fulbright review committee to see who you are. It may appear to be a small additional element, but it highlights you and how you can fit within the Fulbright values as a ‘cultural ambassador’ for yourself, your institution, your research and the Fulbright program.
Join via Zoom, or join us in person (max room capacity is 9 people).
3:30pm for pre-talk drinks + snacks, and 4pm for the talk
I will first introduce the idea of integer relations and discuss practical concerns for their computation by numeric techniques (i.e., using floating point arithmetic). To this end I will discuss the PSLQ and LLL algorithms (and will mention, in passing, some others). I will then extend the idea of integer relations into the relations consisting of algebraic integers. I will discuss, in particular, the case of algebraic integers from quadratic extension fields. As with the first part of the talk, I will discuss practical concerns for calculation by numeric techniques of these quadratic integer relations.
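As a small illustration of detecting an integer relation numerically (not one of the examples from the talk), mpmath's implementation of PSLQ can be used directly:

```python
# Detect the integer relation 1*log(2) + 1*log(3) - 1*log(6) = 0 with PSLQ.
from mpmath import mp, log, pslq

mp.dps = 50  # work to 50 significant digits so the relation is detected reliably
relation = pslq([log(2), log(3), log(6)])
print(relation)  # [1, 1, -1] up to an overall sign
```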
The classical result, due to Jordan, Burnside, and Dickson, says that every normal subgroup of $GL(n, K)$ ($K$ a field, $n \geq 3$) which is not contained in the center contains $SL(n, K)$. A. Rosenberg gave a description of the normal subgroups of $GL(V)$, where $V$ is a vector space of any infinite dimension over a division ring. However, when he considers subgroups of the direct product of the center and the group of linear transformations $g$ such that $g-\mathrm{id}_V$ has finite-dimensional range, the proof is not complete. We fill this gap for countably dimensional $V$, giving a description of the lattice of normal subgroups in the group of infinite column-finite matrices indexed by the positive integers over any field. Similar results for Lie algebras of matrices will be surveyed. This is based on results presented in https://arxiv.org/abs/1808.06873 and https://arxiv.org/abs/1806.01099. (Joint work with Martyna Maciaszczyk and Sebastian Zurek.)
Highly transitive groups, i.e. groups admitting an embedding in $\mathrm{Sym}(\mathbb{N})$ with dense image, form a wide class of groups. For instance, M. Hull and D. Osin proved that it contains all countable acylindrically hyperbolic groups with trivial finite radical. After an introduction to high transitivity, I will present a theorem (from joint work with P. Fima, F. Le Maître and S. Moon) showing that many groups acting on trees are highly transitive. On the one hand, this theorem gives new examples of highly transitive groups. On the other hand, it is sharp because of results by A. Le Boudec and N. Matte Bon.
The Euler-Poincaré characteristic of a discrete group is an important (but also quite mysterious) invariant. It is usually just an integer or a rational number and reflects many quite significant properties. The realm of totally disconnected locally compact groups admits an analogue of the Euler-Poincaré characteristic which surprisingly is no longer just an integer, or a rational number, but a rational multiple of a Haar measure. Warning: in order to gain such an invariant the group has to be unimodular and satisfy some cohomological finiteness conditions. Examples of groups satisfying these additional conditions are the fundamental groups of finite trees of profinite groups. What arouses our curiosity is the fact that - in some cases - the Euler-Poincaré characteristic turns out to be miraculously related to a zeta-function. A large part of the talk will be devoted to the introduction of the just-cited objects. We aim at concluding the presentation by facing the concrete example of the group of F-points of a split semisimple simply connected algebraic group $G$ over $F$ (where $F$ denotes a non-archimedean locally compact field of residue characteristic $p$). Joint work with Gianmarco Chinello and Thomas Weigel.
Join via Zoom, or join us in person (max room capacity is 9 people).
3:30pm for pre-talk drinks + snacks, and 4pm for the talk
Note that the speaker will be presenting on a whiteboard. We will transmit by Zoom, but the view will probably be better in person.
The talk will review the history of parameterized complexity theory and its steady trajectory towards real-world accountability in the design and analysis of algorithms. The original motivation for the definition of the central concept of fixed parameter tractability (FPT) came from the graph minors project of Robertson and Seymour, with its key results on:
The functor project explores how these can be widely generalized and used as complexity classification tools on a number of different levels, and how this approach may provide an opening for an "encountered-instance" (as contrasted with "worst–case") framework for complexity analysis and algorithm design.
We provide a new axiomatic framework, inspired by the work of Ol'shanskii, to describe explicitly certain irreducible unitary representations of second-countable non-discrete unimodular totally disconnected locally compact groups. We show that this setup applies to various families of automorphism groups of locally finite semiregular trees and right-angled buildings. The talk is based on material presented in arxiv.org/abs/2106.05730.
In the 90's, Nebbia conjectured that a group of tree automorphisms acting transitively on the tree's boundary must be of type I, that is, its unitary representations can in principle be classified. For key examples, such as Burger-Mozes groups, this conjecture is verified. Aiming for a better understanding of Nebbia's conjecture and of the representation theory of groups acting on trees, it is natural to ask whether there is a characterisation of type I groups acting on trees. In 2016, in collaboration with Cyril Houdayer, we introduced a refinement of Nebbia's conjecture to a trichotomy, opposing type I groups with groups whose von Neumann algebra is non-amenable. For large classes of groups, including Burger-Mozes groups, we could verify this trichotomy. In this talk, I will motivate and introduce the conjectured trichotomy for groups acting on trees and explain how von Neumann algebraic techniques enter the picture.
In 1993 Brink and Howlett proved that finitely generated Coxeter groups are automatic. In particular, they constructed a finite state automaton recognising the language of reduced words in the Coxeter group. This automaton is constructed in terms of the remarkable set of "elementary roots" in the associated root system. In this talk we outline the construction of Brink and Howlett. We also describe the minimal automaton recognising the language of reduced words, and prove necessary and sufficient conditions for the Brink-Howlett automaton to coincide with this minimal automaton. This resolves a conjecture of Hohlweg, Nadeau, and Williams, and is joint work with Yeeka Yau.
Join via Zoom, or join us in person (max room capacity is 9 people).
3:30pm for pre-talk drinks + snacks, and 4pm for the talk
Geospatial artificial intelligence (geoAI) is an emerging scientific discipline that combines innovations in spatial science, artificial intelligence methods in machine learning, and high-performance computing to extract knowledge from spatial big data. In this talk, I will discuss potential applications for environmental epidemiology, including the ability to incorporate large amounts of big spatial and temporal data in a variety of formats; computational efficiency; flexibility in algorithms and workflows to accommodate relevant characteristics of spatial (environmental) processes, including spatial nonstationarity; and scalability to model other environmental exposures across different geographic areas.
The contraction subgroup for $x$ in the locally compact group $G$ is $\mathrm{con}(x) = \left\{ g\in G \mid x^ngx^{-n} \to 1\text{ as }n\to\infty \right\}$, and the Levi subgroup is $\mathrm{lev}(x) = \left\{ g\in G \mid \{x^ngx^{-n}\}_{n\in\mathbb{Z}} \text{ has compact closure}\right\}$. The following will be shown.
Let $G$ be a totally disconnected, locally compact group and $x\in G$. Let $y\in\mathrm{lev}(x)$. Then there are $x'\in G$ and a compact subgroup $K\leq G$ such that:
- $K$ is normalised by $x'$ and $y$,
- $\mathrm{con}(x') = \mathrm{con}(x)$ and $\mathrm{lev}(x') = \mathrm{lev}(x)$, and
- the group $\langle x',y,K\rangle$ is abelian modulo $K$, and hence flat.
If no compact open subgroup of $G$ is normalised by $x$ and no compact open subgroup of $\mathrm{lev}(x)$ is normalised by $y$, then the flat-rank of $\langle x',y,K\rangle$ is equal to $2$.
Space weather events are associated with strong concentrations of magnetic field on the surface of the Sun. Currently, only low-fidelity space-weather forecasts are possible. Predicting the emergence time and size of the Sun’s active regions would be a significant step forward for space weather forecasting. The physics behind how these active regions emerge from the interior to the surface of the Sun is poorly understood. Only since the advent of a space-based monitoring campaign has it been possible to capture the emergence process of the magnetic field, Doppler velocity and intensity continuum of hundreds of active regions. By measuring the average motion of the polarities, surface velocities and pattern of convection, it is clear that convection plays an important role in the emergence of magnetic field on the Sun. Using machine learning and sophisticated simulations we aim to identify the convection cell dynamics associated with the emergence process, moving towards improved space weather forecasting.
Many astrophysical and laboratory plasmas have a very high conductivity. In the limit of infinite conductivity the magnetic field is “frozen” to the plasma, and consequently the magnetic field topology (defined by the connectivity and linkage of the field lines) is preserved. A breakdown in this “ideal” behaviour permits “magnetic reconnection” to occur, which is behind explosive energy release processes such as solar flares and sawtooth crashes in tokamaks. There exist analogous processes in high Reynolds number fluids and superfluids. Here I will develop the mathematical basis for understanding these concepts. I will also provide some illustrative examples of the importance of characterising magnetic complexity in determining where and how magnetic reconnection occurs.
Register now at Eventbrite
The conference website is here.
This national and international two-day symposium will address the pressing challenge of how to Indigenise mathematical practice at Universities, both in education and research. The methodology is of collaboration and sharing of knowledge and worldviews from within both Indigenous cultures and the cultures of mathematics and its allied disciplines.
The symposium will be organised around a collection of interconnected themes, each chaired by a partnership of Indigenous and non-Indigenous practitioners.
The physical location of this blended face to face and online symposium is significant. The Birabahn building of the Wollotuka Institute blends indoor and outdoor spaces, inviting new perspectives, whilst also having the capabilities for an international video-linked conference.
Speakers:
The notion of amenable actions by discrete groups on $C^{\ast}$-algebras was introduced by Claire Anantharaman-Delaroche more than thirty years ago, and has become a well understood theory with many applications. So it is somewhat surprising that an established theory of amenable actions by general locally compact groups has been missing for a very long time. We now present a theory which extends the discrete case and unifies several notions of approximation properties of actions which have been discussed in the literature. We also discuss the weak containment problem, which asks whether an action $\alpha:G\to\mathrm{Aut}(A)$ is amenable if and only if the maximal and reduced crossed products coincide. In this lecture we report on joint work with Alcides Buss and Rufus Willett.
For a non-amenable group $G$, there can be many group $C^{\ast}$-algebras that lie naturally between the universal and the reduced $C^{\ast}$-algebra of $G$. These are called exotic group $C^{\ast}$-algebras. After a short introduction, I will explain that if $G$ is a simple Lie group or an appropriate locally compact group acting on a tree, the $L^p$-integrability properties of different spherical functions on $G$ (relative to a maximal compact subgroup) can be used to distinguish between exotic group $C^{\ast}$-algebras. This recovers results of Samei and Wiersma. Additionally, I will explain that under certain natural assumptions, the aforementioned exotic group $C^{\ast}$-algebras are the only ones coming from $G$-invariant ideals in the Fourier-Stieltjes algebra of $G$.
This is based on joint work with Dennis Heinig and Timo Siebenand.
We are all surrounded by data, and the easiest way to understand what it is trying to tell us is to draw a picture that helps summarise it in some way. Often, standard visual techniques (like histograms and bar charts) are used, and used well, but sometimes they aren't. With the advent of new technologies that are now widely accessible to industry and the general public, there have been many attempts at proposing new ways of visualising data. Some of them have been very helpful, while others have been less than useful. In this talk I shall present an overview of good, bad, and plain ugly attempts to visualise data in the news and social media, and explain what makes them so. There will be zero maths involved, but plenty of pretty (and not so pretty) pictures... in colour.
We prove that for each finite index subgroup $H$ of the mapping class group of a closed hyperbolic surface, and for each real number $r>0$ there does not exist a faithful $C^{1+r}$--action of $H$ on a circle. (Joint with Thomas Koberda and Cristobal Rivas)
The famous Lehmer problem asks whether there is a gap between 1 and the Mahler measure of algebraic integers which are not roots of unity. Asked in 1933, this deep question concerning number theory has since then been connected to several other subjects. After introducing the concepts involved, we will briefly describe a few of these connections with the theory of linear groups. Then, we will discuss the equivalence of a weak form of the Lehmer conjecture and the "uniform discreteness" of cocompact lattices in semisimple Lie groups (conjectured by Margulis). Joint work with Lam Pham.
Artin groups emerged from the study of braid groups and complex hyperplane arrangements. Artin groups have very simple presentations, yet rather mysterious geometry, with many basic questions widely open. I will present a way of understanding certain Artin groups and Garside groups by building geometric models on which they act. These geometric models are non-positively curved in an appropriate sense, and such curvature structure yields several new results on the algorithmic, topological and geometric aspects of these groups. No previous knowledge of Artin groups or Garside groups is required. This is joint work with D. Osajda.
Let $G$ be the fundamental group of a closed orientable surface of genus at least 2, and $\alpha$ an automorphism of $G$. In a celebrated result, Thurston showed that the mapping torus $G \rtimes_{\alpha} \mathbb{Z}$ is hyperbolic if and only if no power of $\alpha$ preserves a non-trivial conjugacy class. In this talk, I will describe joint work with François Dahmani, where we show that if $G$ is torsion-free hyperbolic, then $G \rtimes_{\alpha} \mathbb{Z}$ is relatively hyperbolic with ``optimal'' parabolic subgroups.
Buildings were introduced by Tits in order to study semi-simple algebraic groups from a geometrical point of view. One of the most important results in the theory of buildings is the classification of thick irreducible spherical buildings of rank at least 3. In particular, any such building comes from an RGD-system. The decisive tool in this classification is the Extension theorem for spherical buildings, i.e. a local isometry extends to the whole building.
Twin buildings were introduced by Ronan and Tits in the late 1980s. Their definition was motivated by the theory of Kac-Moody groups over fields. Each such group acts naturally on a pair of buildings, and the action preserves an opposition relation between the chambers of the two buildings. This opposition relation shares many important properties with the opposition relation on the chambers of a spherical building. Thus, twin buildings appear to be natural generalizations of spherical buildings with infinite Weyl group. Since the notion of RGD-systems exists not only in the spherical case, one can ask whether any twin building (satisfying some further conditions) comes from an RGD-system. In 1992 Tits proved several results that are inspired by his strategy in the spherical case, and he discussed several obstacles to obtaining a similar Extension theorem for twin buildings. In this talk I will speak about the history and development of the Extension theorem for twin buildings.
We provide a unified combinatorial framework to study orbits in affine flag varieties via the associated Bruhat-Tits buildings. We first formulate, for arbitrary affine buildings, the notion of a chimney retraction. This simultaneously generalises the two well-known notions of retractions in affine buildings: retractions from chambers at infinity and retractions from alcoves. We then present a recursive formula for computing the images of certain minimal galleries in the building under chimney retractions, using purely combinatorial tools associated to the underlying affine Weyl group. Finally, for Bruhat-Tits buildings, we relate these retractions and their effect on certain minimal galleries to double coset intersections in the corresponding affine flag variety. This is joint work with Elizabeth Milicevic, Yusra Naqvi and Petra Schwer.
The group of automorphisms of a connected locally finite graph is naturally a totally disconnected, locally compact topological group when equipped with the permutation topology. It therefore makes sense to ask for which graphs this topology is not discrete. We show that, in the case of Cayley graphs of Coxeter groups, one can fully characterise the discrete ones in terms of the symmetries of the corresponding Coxeter system. Joint work with Federico Berlai.
I will give an overview of a programme investigating projective embeddings of (exceptional) geometries which Hendrik Van Maldeghem and I started in 2010.
The geometry of elements fixed by an automorphism of a spherical building is a rich and well-studied object, intimately connected to the theory of Galois descent in buildings. In recent years, a complementary theory has emerged investigating the geometry of elements mapped onto opposite elements by a given automorphism. In this talk we will give an overview of this theory. This work is joint primarily with Hendrik Van Maldeghem (along with others).
Supply chain management comprises many vehicle routing and scheduling problems across different time and geographical scales. In this talk, we discuss a supply chain management problem spanning a large geographical area that integrates customer clustering, transshipment and local transportation. While much research has been performed on each component of our proposed problem, there is currently no established technique for the integration of transshipment with local transportation. This talk will present an iterative, large-neighbourhood search heuristic to find high quality solutions to the integrated transshipment and local transportation problem. We will describe the numerous techniques necessary to diversify the search and solve large-scale supply chain management problems. Our proposed heuristic, based on a multi-armed bandit algorithm, is able to find high-quality solutions for the integrated problem that significantly reduce costs compared to solving each problem sequentially.
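As a hypothetical sketch (not the authors' implementation) of how a multi-armed bandit can drive a large-neighbourhood search, one might select destroy/repair operators with an epsilon-greedy rule and reward them by the cost improvement they produce:

```python
# Epsilon-greedy bandit over hypothetical LNS operators; the reward is assumed to
# be the cost improvement obtained when an operator's neighbourhood is searched.
import random

class EpsilonGreedyBandit:
    def __init__(self, arms, epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}  # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)                  # explore
        return max(self.arms, key=lambda a: self.values[a])  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Hypothetical usage inside an LNS loop:
# bandit = EpsilonGreedyBandit(["remove_random", "remove_cluster", "remove_route"])
# arm = bandit.select(); improvement = destroy_and_repair(arm); bandit.update(arm, improvement)
```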
Times are CST (Canada); these dates correspond to 29 November to 2 December in Australian time zones. Note that the dates have changed due to a clash with WIPCE.
This will be an international event that is mainly online. Conference Website
Chaired by Associate Professor Edward Doolittle
Leonhard Euler (1707-1783) was not only one of the greatest mathematicians, he was probably also the most prolific and most influential mathematician in history. In this talk I will give a biographical sketch, and will try to put the various stages of Euler's life and career into a historical and academic perspective. I will say very little about Euler's mathematics.
Speaker Biography: Karl Dilcher did his graduate studies at Queen’s University in Kingston, Ontario, Canada, where he finished his Ph.D. in 1983. He is currently a professor at Dalhousie University in Halifax, Nova Scotia, Canada. He first arrived there in 1984 as a postdoctoral fellow, co-supervised by the late Jon Borwein. It was this connection which brought him for visits to the University of Newcastle in 2013 and 2015, where he also participated in CARMA events. His research interests include classical analysis, special functions, and elementary and combinatorial number theory.
The Peach Stone Bowl Game is a game played during the Midwinter Ceremonies of the Rotinonhsonni (Iroquois) people of Ontario, Quebec, and New York State. The design of the Peach Stone Bowl Game is of some interest to mathematicians, in particular the design issues related to the fairness of the game and its expected stopping time. In this talk, Dr Edward Doolittle will present the game and an analysis of a simplified version of the game, using Markov theory and computer simulations to support the assertion that the game was carefully designed through experimental means in order to meet its ceremonial and social requirements. The work is joint with his graduate student, Mr Layne Burns from James Smith Cree Nation in Saskatchewan.
Speaker Biography: Associate Professor Edward Doolittle is Kanyen’kehake (Mohawk) from Six Nations in southern Ontario. He earned a PhD in pure mathematics (partial differential equations) from the University of Toronto in 1997. From then until 2001 he worked for Queen’s University’s Aboriginal Teacher Education Program, helping to administer the program and teaching Indigenous Mathematics Education, and from 2000 to 2001 he studied the Mohawk language in immersion with Onkwewenna Kentsyohkwa (Our Language Group) on Six Nations. From 2001 he has been on the faculty of First Nations University and the University of Regina, currently as Associate Professor of Mathematics and Associate Dean, Research and Graduate Studies. He is a Fellow of the Canadian Mathematical Society (CMS), a recipient of the Adrien Pouliot Award from the CMS, and a recipient of a Governor General’s Gold Medal from the Governor General of Canada.