Automaton semigroups are a natural generalisation of the automaton groups introduced by Grigorchuk and others in the 1980s as examples of groups having various 'exotic' properties. In this talk I will give a brief introduction to automaton semigroups, and then discuss recent joint work with Alan Cain on the extent to which the class of automaton semigroups is closed under certain semigroup constructions (free products and wreath products).
Fundamental questions in basic and applied ecology alike involve complex adaptive systems, in which localized interactions among individual agents give rise to emergent patterns that feed back to affect individual behavior. In such systems, a central challenge is to scale from the "microscopic" to the "macroscopic", in order to understand the emergence of collective phenomena, the potential for critical transitions, and the ecological and evolutionary conflicts between levels of organization. This lecture will explore some specific examples, from universality in bacterial pattern formation to collective motion and collective decision-making in animal groups. It also will suggest that studies of emergence, scaling and critical transitions in physical systems can inform the analysis of similar phenomena in ecological systems, while raising new challenges for theory.
Professor Levin received his B.A. from Johns Hopkins University and his Ph.D. in mathematics from the University of Maryland. At Cornell University (1965-1992), he was Chair of the Section of Ecology and Systematics, and then Director of the Ecosystems Research Center, the Center for Environmental Research and the Program on Theoretical and Computational Biology, as well as Charles A. Alexander Professor of Biological Sciences (1985-1992). Since 1992, he has been at Princeton University, where he is currently George M. Moffett Professor of Biology and Director of the Center for BioComplexity. He retains an Adjunct Professorship at Cornell.
His research interests are in understanding how macroscopic patterns and processes are maintained at the level of ecosystems and the biosphere, in terms of ecological and evolutionary mechanisms that operate primarily at the level of organisms; in infectious diseases; and in the interface between basic and applied ecology.
Simon Levin visits Australia for the first in the Maths of Planet Earth Simons Public Lecture Series. http://mathsofplanetearth.org.au/events/simons/
Let $s_q(n)$ be the sum of the $q$-ary digits of $n$. For example $s_{10}(1729) = 1 + 7 + 2 + 9 = 19$. It is known what $s_q(n)$ looks like "on average". It can be shown that $s_q(n^h)$ looks $h$ times bigger "on average". This raises the question: is the ratio of these two quantities $h$ on average? In this talk we will give some history on the sum of digits function, and will give a proof of one of Stolarsky's conjectures concerning the minimal values of the ratio of $s_q(n)$ and $s_q(n^h)$.
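Purely as an illustration of the quantities involved (not part of the talk), a short computation of $s_q(n)$ and of the average ratio $s_q(n^h)/s_q(n)$:

```python
# Illustrative only: the base-q digit sum s_q(n), and the average of s_q(n^h)/s_q(n).
def digit_sum(n, q=10):
    """Sum of the base-q digits of n."""
    total = 0
    while n > 0:
        n, r = divmod(n, q)
        total += r
    return total

assert digit_sum(1729) == 1 + 7 + 2 + 9  # = 19, as in the example above

# Average of s_10(n^2) / s_10(n) over n < 10^5; the talk asks how this ratio behaves.
h, N = 2, 10**5
ratios = [digit_sum(n**h) / digit_sum(n) for n in range(1, N)]
print(sum(ratios) / len(ratios))
```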
Three ideas --- active sets, steepest descent, and smooth approximations of functions --- permeate nonsmooth optimization. I will give a fresh perspective on these concepts, and illustrate how many results in these areas can be strengthened in the semi-algebraic setting. This is joint work with A.D. Ioffe (Technion), A.S. Lewis (Cornell), and M. Larsson (EPFL).
After Gromov's work in the 1980s, the modern approach to studying infinite groups is from the geometric point of view, seeing them as metric spaces and using geometric concepts. One of these is the concept of distortion of a subgroup in a group. Here we will give the definition and some examples of distorted and nondistorted subgroups and some recent results on them. The main tools used to establish these results are quasi-metrics, or metric estimates: quantities that differ from the distance by a multiplicative constant but still capture enough of the geometry to understand distortion.
The joint spectral radius of a finite set of real $d \times d$ matrices is defined to be the maximum possible exponential rate of growth of long products of matrices drawn from that set. A set of matrices is said to have the finiteness property if there exists a periodic product which achieves this maximal rate of growth. J. C. Lagarias and Y. Wang conjectured in 1995 that every finite set of real $d \times d$ matrices satisfies the finiteness property. However, T. Bousch and J. Mairesse proved in 2002 that counterexamples to the finiteness conjecture exist, showing in particular that there exists a family of pairs of $2 \times 2$ matrices which contains a counterexample. Similar results were subsequently given by V. D. Blondel, J. Theys and A. A. Vladimirov and by V. S. Kozyakin, but no explicit counterexample to the finiteness conjecture was given. This talk will discuss an explicit counterexample to this conjecture.
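To make the definition concrete, here is a crude brute-force numerical sketch (illustrative only; the matrices below are arbitrary examples, not the counterexample discussed in the talk):

```python
# Rough numerical sketch: estimate the joint spectral radius of a finite set of
# matrices by maximizing the normalized norm of all products of a given length k.
import itertools
import numpy as np

def jsr_estimate(matrices, k):
    """max over all length-k products P of ||P||^(1/k); this tends to the JSR as k grows."""
    best = 0.0
    for choice in itertools.product(matrices, repeat=k):
        P = np.linalg.multi_dot(choice) if k > 1 else choice[0]
        best = max(best, np.linalg.norm(P, 2) ** (1.0 / k))
    return best

A0 = np.array([[1.0, 1.0], [0.0, 1.0]])
A1 = np.array([[1.0, 0.0], [1.0, 1.0]])
for k in (1, 2, 4, 8):
    print(k, jsr_estimate([A0, A1], k))
```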
In 1997, Kaneko introduced the poly-Bernoulli numbers. Poly-Euler numbers are introduced as a generalization of the Euler numbers in a manner similar to the introduction of the poly-Bernoulli numbers. In my talk, some properties of poly-Euler numbers, for example explicit formulas, sign changes, a Clausen-von Staudt type formula and combinatorial interpretations, will be shown.
This is joint work with Yasuo Ohno.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html
We prove that if $q\ne0,\pm1$ and $\ell\ge1$ are fixed integers, then the numbers $$ 1, \quad \sum_{n=1}^\infty\frac{1}{q^n-1}, \quad \sum_{n=1}^\infty\frac{1}{q^{n^2}-1}, \quad \dots, \quad \sum_{n=1}^\infty\frac{1}{q^{n^\ell}-1} $$ are linearly independent over $\mathbb{Q}$. This generalizes a result of Erdős, who treated the case $\ell=1$. The method is based on the original approaches of Chowla and Erdős, together with results of Alford, Granville and Pomerance on primes in arithmetic progressions with large moduli.
This is joint work with Yohei Tachiya.
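For a numerical feel for the numbers in the statement (illustrative only, taking $q=2$):

```python
# Illustrative partial sums of sum_{n>=1} 1/(q^(n^l) - 1) for q = 2 and l = 1, 2, 3.
import math

def series_value(q, l, tol_digits=16):
    """Truncated sum of 1/(q^(n^l) - 1); omitted terms are below 10^(-tol_digits)."""
    total, n = 0.0, 1
    while (n ** l) * math.log10(q) < tol_digits:
        total += 1.0 / (q ** (n ** l) - 1)
        n += 1
    return total

for l in (1, 2, 3):
    print(l, series_value(2, l))
# l = 1 gives about 1.6067 (the Erdos-Borwein constant); the result above says that
# these values, together with 1, are linearly independent over the rationals.
```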
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
The desire to understand $\pi$, the challenge, and originally the need, to calculate ever more accurate values of $\pi$, the ratio of the circumference of a circle to its diameter, has captured mathematicians - great and less great - for many centuries. And, especially recently, $\pi$ has provided compelling examples of computational mathematics. $\pi$, uniquely in mathematics, is pervasive in popular culture and the popular imagination. In this lecture I shall intersperse a largely chronological account of $\pi$'s mathematical and numerical status with examples of its ubiquity. It is truly a number for Planet Earth.
I am grateful to have been appointed in a role with a particular focus on First Year Teaching as well as a research mandate. The prospect of trying to do both well is daunting but exciting. I have begun talking with some of my colleagues who are in somewhat similar roles in other Universities in Australia and overseas about what they do. I would like to share what I've learnt, as well as some of my thoughts so far about how this new role might evolve. I am also very interested in input from the Maths discipline or indeed any of my colleagues as to what you think is important and how this role can benefit the maths discipline and our school.
Reaction-diffusion processes occur in many materials with microstructure such as biological cells, steel or concrete. The main difficulty in accurately modelling and simulating such processes is to account for the fine microstructure of the material. One method of upscaling multi-scale problems, which has proven reliable for obtaining feasible macroscopic models, is the method of periodic homogenisation.
The talk will give an introduction to multi-scale modelling of chemical mechanisms in domains with microstructure as well as to the method of periodic homogenisation. Moreover, a few aspects of solving the resulting systems of equations numerically will also be discussed.
I will survey what is known and some of the open questions.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html
I will survey what is known and some of the open questions.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
We discuss some recently discovered relations between L-values of modular forms and integrals involving the complete elliptic integral K. Gentle and illustrative examples will be given. Such relations also lead to closed forms of previously intractable integrals and (chemical) lattice sums.
I will survey what is known and some of the open questions.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html
I will survey what is known and some of the open questions.
Modern mathematics suffers from subtle but serious logical problems connected with the widespread use of infinite sets and the non-computational aspects of real numbers. The result is an ever-widening gap between the theories of pure mathematics and the computations available to computer scientists.
In this talk we discuss a new approach to mathematics that aims to remove many of the logical difficulties by returning our focus to the all-important aspect of the rational numbers and polynomial arithmetic. The key is rational trigonometry, which shows how to rethink the fundamentals of trigonometry and metrical geometry in a purely algebraic way, opens the door to more general non-Euclidean geometries, and has numerous concrete applications for computer scientists interested in graphics and robotics.
The classical prolate spheroidal wavefunctions (prolates) arise when solving the Helmholtz equation by separation of variables in prolate spheroidal coordinates. They interpolate between Legendre polynomials and Hermite functions. In a beautiful series of papers published in the Bell Labs Technical Journal in the 1960's, they were rediscovered by Landau, Slepian and Pollak in connection with the spectral concentration problem. After years spent out of the limelight while wavelets drew the focus of mathematicians, physicists and electrical engineers, the popularity of the prolates has recently surged through their appearance in certain communication technologies. In this talk we outline some developments in the sampling theory of bandlimited signals that employ the prolates, and the construction of bandpass prolate functions.
This is joint work with Joe Lakey (New Mexico State University)
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
We introduce and study a new dual condition which characterizes zero duality gap in nonsmooth convex optimization. We prove that our condition is weaker than all existing constraint qualifications, including the closed epigraph condition. Our dual condition was inspired by, and is weaker than, the so-called Bertsekas’ condition for monotropic programming problems. We give several corollaries of our result and special cases as applications. We pay special attention to the polyhedral and sublinear cases, and their implications in convex optimization.
This is joint work with Jonathan M. Borwein and Liangjin Yao.
Vulnerability is the resistance of a network after disruptions to its links or nodes. Since any network can be modelled by a graph, many vulnerability measures have been defined to observe the resistance of networks. For this purpose, vulnerability measures such as connectivity, integrity, toughness, etc., have been studied widely over all vertices of a graph. Recently, many researchers have begun to study vulnerability measures on graphs defined over vertices or edges that have a special property, rather than over all vertices of the graph.
Independent domination, connected domination and total domination are examples of such measures. The total accessibility number of a graph is defined as a new measure by choosing accessible sets $S \subset V$, that is, sets with the special property of accessibility. The total accessibility number of a graph $G$ is based on the accessibility number of the graph, with the subsets $S$ being accessible sets of the graph. The accessibility number of a connected graph $G$ is a concept based on the neighbourhood relation between any two vertices, via another vertex connected to both of them.
Graph automatic groups are an extension of the notion of an automatic group, introduced by Kharlampovich, Khoussainov and Miasnikov in 2011 with the intention of capturing a wider class of groups while preserving computational properties such as having a quadratic-time word problem. We extend the notion further by replacing regular languages with more general language classes. We prove that the nonsolvable Baumslag-Solitar groups are (context-free)-graph automatic, that (context-sensitive)-graph automatic implies a context-sensitive word problem, and conversely that groups with a context-sensitive word problem are (context-sensitive)-automatic. Finally, an obstruction to (context-sensitive)-graph automatic implying a polynomial-time word problem is given.
This is joint work with Jennifer Taback, Bowdoin College.
Spatial patterns of events that occur on a network of lines, such as traffic accidents recorded on a street network, present many challenges to a statistician. How do we know whether a particular stretch of road is a "black spot", with a higher-than-average risk of accidents? How do we know which aspects of road design affect accident risk? These important questions cannot be answered satisfactorily using current techniques for spatial analysis. The core problem is that we need to take account of the geometry of the road network. Standard methods for spatial analysis assume that "space" is homogeneous; they are inappropriate for point patterns on a linear network, and give fallacious results. To make progress, we must abandon some of the most cherished assumptions of spatial statistics, with far-reaching implications for statistical methodology.
The talk will describe the first few steps towards a new methodology for analysing point patterns on a linear network. Ingredients include stochastic processes, discrete graph theory and classical partial differential equations as well as statistical methodology. Examples come from ecology, criminology and neuroscience.
In this talk we introduce a Douglas-Rachford inspired projection algorithm: the cyclic Douglas-Rachford iteration scheme. We show that, unlike the classical Douglas-Rachford scheme, the method can be applied directly to convex feasibility problems in Hilbert space without recourse to a product space formulation. Initial results from numerical experiments comparing our methods to the classical Douglas-Rachford scheme are promising.
This is joint work with Prof. Jonathan Borwein.
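As a rough illustration only (my own sketch of one way such a cyclic scheme can be set up, not necessarily the authors' exact formulation), one can compose two-set Douglas-Rachford operators $T_{A,B}=\tfrac12(I+R_B R_A)$ over consecutive pairs of sets:

```python
# Illustrative sketch of a cyclic Douglas-Rachford-type iteration for three balls in R^2.
# Pairwise DR operator: T_{A,B} = (I + R_B R_A)/2, with reflection R_C = 2 P_C - I;
# the pairwise operators are applied cyclically over consecutive pairs of sets.
import numpy as np

def project_ball(x, centre, radius):
    d = x - centre
    nrm = np.linalg.norm(d)
    return x if nrm <= radius else centre + radius * d / nrm

def reflect(x, project):
    return 2.0 * project(x) - x

def dr_step(x, proj_A, proj_B):
    return 0.5 * (x + reflect(reflect(x, proj_A), proj_B))

# Three balls with a nonempty common intersection.
balls = [(np.array([1.0, 0.0]), 1.2), (np.array([-1.0, 0.0]), 1.2), (np.array([0.0, 1.0]), 1.2)]
projs = [lambda x, c=c, r=r: project_ball(x, c, r) for (c, r) in balls]

x = np.array([3.0, -2.0])
for _ in range(200):
    for i in range(len(projs)):  # cycle over the pairs (A_1,A_2), (A_2,A_3), (A_3,A_1)
        x = dr_step(x, projs[i], projs[(i + 1) % len(projs)])

# For a consistent problem such as this one, the projections of the limit onto the
# sets should (nearly) coincide at a common point of the three balls.
print(x, [p(x) for p in projs])
```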
We will discuss the substantial mathematical, computational, historical and philosophical aspects of this celebrated and controversial theorem. Much of this talk should be accessible to undergraduates, but we will also discuss some of the crucial details of the actual revision by Robertson, Sanders, Seymour and Thomas of the original Appel-Haken computer proof. We will additionally cover recent new computer proofs by Gonthier, and by Steinberger, and also the generalisations of the theorem by Hajós and Hadwiger which are currently still open. New software developed by the speaker will be used to visually illustrate many of the subtle points involved, and we will examine the air of controversy that still surrounds existing computer proofs. Finally, the prospect of a human proof will be canvassed.
ABOUT THE SPEAKER: Mr Michael Reynolds has a Master's degree in Mathematics and extensive experience in the software industry. He is currently doing his PhD in Graph Theory at the University of Newcastle.
In response to a recent report from Australia's Chief Scientist (Prof Ian Chubb), the Australian government recently sought applications from consortia of universities (and other interested parties) interested in developing pre-service programs that will improve the quality of mathematics and science school teachers. In particular, the programs should:
At UoN, a group of us from Education and MAPS produced the outline of a vision for our own BTeach/BMath program which builds on local strengths. In the context of very tight timelines, this became a part of an application together with five other universities. In this seminar we will outline the vision that we produced, and invite further contributions and participation, with a view to improving the BMath/BTeach program regardless of the outcome of the application of which we are a part.
We continue on the Probabilistic Method, looking at Chapter 4 of Alon and Spencer. We will consider the second moment, Chebyshev's inequality, Markov's inequality and Chernoff's inequality.
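For reference, the standard statements of these tools (in generic form, not tied to the book's notation): for a nonnegative random variable $X$ and $a>0$, Markov's inequality gives $\Pr(X\ge a)\le \mathbb{E}[X]/a$; Chebyshev's inequality gives $\Pr(|X-\mathbb{E}[X]|\ge a)\le \mathrm{Var}(X)/a^2$; and a generic Chernoff bound is $$\Pr(X\ge a)\le \inf_{t>0} e^{-ta}\,\mathbb{E}\bigl[e^{tX}\bigr].$$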
Our most recent computations tell us that any counterexample to Giuga’s 1950 primality conjecture must have at least 19,907 digits. Equivalently, any number which is both a Giuga and a Carmichael number must have at least 19,907 digits. This bound has not been achieved through exhaustive testing of all numbers with up to 19,907 digits, but rather through exploitation of the properties of Giuga and Carmichael numbers. We introduce the conjecture and an algorithm for finding lower bounds to a counterexample, then present our recent results and discuss challenges to further computation.
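For readers who want to experiment, a small illustrative checker (not the authors' code) for the two properties involved, using the standard characterisations: a Giuga number is a composite $n$ with $p \mid (n/p - 1)$ for every prime $p \mid n$, and a Carmichael number is a squarefree composite $n$ with $(p-1) \mid (n-1)$ for every prime $p \mid n$ (Korselt's criterion).

```python
# Illustrative checker for Giuga and Carmichael numbers (standard definitions).
# A counterexample to Giuga's conjecture would have to satisfy both at once.
from sympy import factorint

def is_giuga(n):
    """Composite n such that p divides n/p - 1 for every prime p dividing n."""
    f = factorint(n)
    if len(f) < 2 or any(e > 1 for e in f.values()):
        return False
    return all((n // p - 1) % p == 0 for p in f)

def is_carmichael(n):
    """Korselt's criterion: squarefree composite n with (p - 1) | (n - 1) for every prime p | n."""
    f = factorint(n)
    if len(f) < 2 or any(e > 1 for e in f.values()):
        return False
    return all((n - 1) % (p - 1) == 0 for p in f)

print([n for n in range(2, 2000) if is_carmichael(n)])                 # 561, 1105, 1729
print([n for n in range(2, 2000) if is_giuga(n)])                      # 30, 858, 1722
print(any(is_giuga(n) and is_carmichael(n) for n in range(2, 10**5)))  # no small counterexample
```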
Network infrastructures are a common phenomenon. Network upgrades and expansions typically occur over time due to budget constraints. We introduce a class of incremental network design problems that allow investigation of many of the key issues related to the choice and timing of infrastructure expansions and their impact on the costs of the activities performed on that infrastructure. We examine three variants: incremental network design with shortest paths, incremental network design with maximum flows, and incremental design with minimum spanning trees. We investigate their computational complexity, we analyse the performance of natural heuristics, we derive approximation algorithms and we study integer program formulations.
The degree/diameter problem in graph theory is a theoretical problem with applications in network design. The problem is to find the maximum possible number of nodes in a network, given limits on the number of links attached to any node and on the largest number of links that must be traversed when a message is sent from one node to another inside the network. An upper bound, known as the Moore bound, is known for this problem, and the graphs that attain it are called Moore graphs.
In this talk we give an overview of the existing Moore graphs and we discuss the existence of a Moore graph of degree 57 with diameter 2 which has been an open problem for more than 50 years.
In this talk, we study the rate of convergence of the cyclic projection algorithm applied to finitely many semi-algebraic convex sets. We establish an explicit convergence rate estimate which relies on the maximum degree of the polynomials that generate the semi-algebraic convex sets and the dimension of the underlying space. We achieve our results by exploiting the algebraic structure of the semi-algebraic convex sets.
This is joint work with Jon Borwein and Guoyin Li.
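Purely to fix ideas about the iteration being analysed (the sets below are simple semi-algebraic convex sets chosen for illustration; the rate estimates themselves are the subject of the talk):

```python
# Cyclic projections x_{k+1} = (P_m ... P_1)(x_k) for two simple semi-algebraic convex
# sets: the unit disc {x : ||x||^2 <= 1} and a halfspace {x : <a, x> <= b}.
import numpy as np

def proj_disc(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

a, b = np.array([1.0, 1.0]), 1.8

def proj_halfspace(x):
    viol = a @ x - b
    return x if viol <= 0.0 else x - viol * a / (a @ a)

x = np.array([4.0, -3.0])
for k in range(25):
    x = proj_halfspace(proj_disc(x))
    if k % 5 == 0:
        # iterate, distance past the disc boundary, violation of the halfspace
        print(k, x, max(np.linalg.norm(x) - 1.0, 0.0), max(a @ x - b, 0.0))
```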
W. T. Tutte published a paper in 1963 entitled "How to Draw a Graph". Tutte's motivation was mathematical, and his paper can be seen as a contribution to the long tradition of geometric representations of combinatorial objects.
Over the following 40-odd years, the motivation for creating visual representations of graphs has changed from mathematical curiosity to visual analytics. Current demand for graph drawing methods is now high, because of the potential for more human-comprehensible visual forms in industries as diverse as biotechnology, homeland security and sensor networks. Many new methods have been proposed, tested, implemented, and found their way into commercial tools. This paper describes two strands of this history: the force directed approach, and the planarity approach. Both approaches originate in Tutte's paper.
Further, we demonstrate a number of methods for graph visualization that can be derived from the weighted version of Tutte's method. These include results on clustered planar graphs, edge-disjoint paths, an animation method, interactions such as adding/deleting vertices/edges, and a focus-plus-context view method.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
Presenters: Judy-anne Osborn, Ben Brawn, Mick Gladys.
Eric Mazur is a Harvard physicist who has become known for the strategies he introduced for teaching large first-year service (physics) classes in a way that seems to improve the students' conceptual understanding of the material whilst not hurting their exam performance. Implementation of the ideas includes the use of clicker-like technology (Mick Gladys will talk about his own implementation using mobile phones) as well as lower-tech card-based analogues. We will screen a YouTube video showing Professor Mazur explain his ideas, and then describe how we have adapted some of them in maths and physics.
In trajectory optimization, the optimal path of a flight system or a group of flight systems is searched for, often in an interplanetary setting: we are in search of trajectories for one or more spacecraft. On the one hand, this is a well-developed field of research, in which commercial software packages are already available for various scenarios. On the other hand, the computation of such trajectories can be rather demanding, especially when low-thrust missions with long travel times (e.g., years) are considered. Such missions invariably involve gravitational slingshot maneuvers at various celestial bodies in order to save propellant or time. Such maneuvers involve vastly different time scales: years of coasting can be followed by course corrections on a daily basis. In this talk, we give an overview of trajectory optimization for space vehicles and highlight some recent algorithmic developments.
You are invited to a celebration of the 21st anniversary of the Factoring Lemma. This lemma was the key to solving some long-standing open problems, and was the starting point of an investigation of totally disconnected, locally compact groups that has ensued over the last 20 years. In this talk, the life of the lemma will be described from its conception through to a very recent strengthening of it. It will be described at a technical level, as well as viewed through its relationships with topology, geometry, combinatorics, algebra, linear algebra and research grants.
A birthday cake will be served afterwards.
Please make donations to the Mathematics Prize Fund in lieu of gifts.
Given a set T in Euclidean space, whose elements are called sites, and a particular site s, the Voronoi cell of s is the set formed by all points closer to s than to any other site. The Voronoi diagram of T is the family of Voronoi cells of all the elements of T. In this talk we show some applications of the Voronoi diagrams of finite and infinite sets and analyze direct and inverse problems concerning the cells. We also discuss the stability of the cells under different types of perturbations and the effect of assigning weights to the sites.
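A direct illustration of the definition (not from the talk): classify grid points by their nearest site; each digit printed below indexes the site whose Voronoi cell contains that grid point.

```python
# The Voronoi cell of a site s consists of the points closer to s than to any other
# site. Here we classify points of a planar grid by their nearest site.
import numpy as np

sites = np.array([[0.0, 0.0], [2.0, 0.5], [1.0, 2.0], [-1.5, 1.0]])

def nearest_site(x, sites):
    """Index of the site whose Voronoi cell contains x (ties broken arbitrarily)."""
    return int(np.argmin(np.linalg.norm(sites - x, axis=1)))

xs = np.linspace(-3, 3, 7)
ys = np.linspace(-2, 3, 6)
for y in ys[::-1]:
    print("".join(str(nearest_site(np.array([x, y]), sites)) for x in xs))
```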
Geodesic metric spaces provide a setting in which we can develop much of nonlinear, and in particular convex, analysis in the absence of any natural linear structure. For instance, in a state space it often makes sense to speak of the distance between two states, or even a chain of connecting intermediate states, whereas the addition of two states makes no sense at all.
We will survey the basic theory of geodesic metric spaces, and in particular Gromov's so-called CAT($\kappa$) spaces. If there is time (otherwise in a later talk), we will examine some recent results concerning alternating projection type methods, principally the Douglas--Rachford algorithm, for solving the two-set feasibility problem in such spaces.
In a recent referee report, the referee said he/she could not understand the proofs of either of the two main results. Come and judge for yourself! This is joint work with Darryn Bryant and Don Kreher.
Complex (and Dynamical) Systems
A Data-Based View of Our World
Population censuses and the human face of Australia
Scientific Data Mining
Earth System Modeling
Mitigating Natural Disaster Risk
Sustainability – Environmental modelling
BioInvasion and BioSecurity
Realising Our Subsurface Potential
Abstract submission closes 31st May, 2013.
For more information, visit the conference website.
Roughly speaking, an automorphism $a$ of a graph $G$ is geometric if there is a drawing $D$ of $G$ such that $a$ induces a symmetry of $D$; if $D$ is planar then $a$ is planar. In this talk we discuss geometric and planar automorphisms. In particular we sketch a linear time algorithm for finding a planar drawing of a planar graph with maximum symmetry.
We show that a combination of two simple preprocessing steps would generally improve the conditioning of a homogeneous system of linear inequalities. Our approach is based on a comparison among three different notions of condition numbers for linear inequalities.
The talk is based on a joint work with Javier Peña and Negar Soheili (Carnegie-Mellon University).
Overview of Course Content
The classical regularity theory is centred around the implicit and Lyusternik-Graves theorems, on the one hand, and the Sard theorem and transversality theory, on the other. The theory (and a number of its applications to various problems of variational analysis) to be discussed in the course deals with similar problems for non-differentiable and set-valued mappings. This theory grew out of demands coming (mainly) from optimization theory, and the subsequent understanding that some key ideas of the classical theory can be naturally expressed in purely metric terms, without mention of any linear and/or differentiable structures.
Topics to be covered
The "theory" part of the course consists of five sections:
Formally, basic knowledge of functional analysis, plus some acquaintance with convex analysis and nonlinear analysis in Banach spaces (e.g. Fréchet and Gâteaux derivatives, the implicit function theorem), will be sufficient for understanding the course. An understanding of the interplay between analytic and geometric concepts would be very helpful.
I will explain how the probabilistic method can be used to obtain lower bounds for the Hadamard maximal determinant problem, and outline how the Lovász local lemma (Alon and Spencer, Corollary 5.1.2) can be used to improve the lower bounds.
This is a continuation of last semester's lectures on the probabilistic method, but is intended to be self-contained.
The finite element method has become the most powerful approach in approximating solutions of partial differential equations arising in modern engineering and physical applications. We present some efficient finite element methods for Reissner-Mindlin, biharmonic and thin plate equations.
In the first part of the talk I present some applied partial differential equations and introduce the finite element method using the biharmonic equation. In the second part of the talk I will discuss the finite element method for Reissner-Mindlin, biharmonic and thin plate spline equations in a unified framework.
Yes! Finally there is some discrete maths in the high school curriculum! Well, perhaps.
In this talk I will go over the inclusion of discrete mathematics content in the new national curriculum, the existing plans for its implementation, what this will mean for high school teachers, and brainstorm ideas for helping out, if they need our help. I will also talk about "This is Megamathematics" and perhaps, if we have time, we can play a little bit with "Electracity".
This talk deals with problems that are asymptotically related to best-packing and best-covering. In particular, we discuss how to efficiently generate N points on a d-dimensional manifold that have the desirable qualities of well-separation and optimal order covering radius, while asymptotically having a prescribed distribution. Even for certain small numbers of points like N=5, optimal arrangements with regard to energy and polarization can be a challenging problem.
I will report on work I performed with Jim Zhu over the past three years on how to exploit different forms of symmetry in variational analysis. Various open problems will be flagged.
This talk is available at http://carma.newcastle.edu.au/jon/symva-talk.pdf and the related paper is at http://carma.newcastle.edu.au/jon/symmetry.pdf. It has recently appeared in Advances in Nonlinear Analysis.
I will discuss symmetric criticality and the Mountain Pass Lemma. I will provide the needed background for anyone who did not come to Part 1.
This talk is available at http://carma.newcastle.edu.au/jon/symva-talk.pdf and the related paper is at http://carma.newcastle.edu.au/jon/symmetry.pdf. It has recently appeared in Advances in Nonlinear Analysis.
Do you ever wonder what goes on behind the closed doors of some of your professors? Or colleagues? What kind of stuff can I do for my Honours degree? Or my RHD studies? Well, let these wonders cease!
This sequence of talks will expose the greatest (mathematical) desires of mathematicians at Newcastle, highlighting several areas of current research from the purest of the pure to the most applicable of the applied. Talks will aim to be accessible to undergraduates (mostly), or anyone with a desire to learn more mathematics.
Program: The feasibility problem associated with nonempty closed convex sets $A$ and $B$ is to find some $x\in A \cap B$. Projection algorithms in general aim to compute such a point. These algorithms play key roles in optimization and have many applications outside mathematics - for example in medical imaging. Until recently convergence results were only available in the setting of linear spaces (more particularly, Hilbert spaces) and where the two sets are closed and convex. The extension into geodesic metric spaces allows their use in spaces where there is no natural linear structure, which is the case for instance in tree spaces, state spaces, phylogenomics and configuration spaces for robotic movements.
After reviewing the pertinent aspects of CAT(0) spaces introduced in Part I, including results for von Neumann's alternating projection method, we will focus on the Douglas-Rachford algorithm in CAT(0) spaces. Two situations arise: spaces with constant curvature and those with non-constant curvature. A prototypical space of the latter kind will be introduced and the behaviour of the Douglas-Rachford algorithm within it examined.
Do you ever wonder what goes on behind the closed doors of some of your professors? Or colleagues? What kind of stuff can I do for my Honours degree? Or my RHD studies? Well, let these wonders cease!
This sequence of talks will expose the greatest (mathematical) desires of mathematicians at Newcastle, highlighting several areas of current research from the purest of the pure to the most applicable of the applied. Talks will aim to be accessible to undergraduates (mostly), or anyone with a desire to learn more mathematics.
Programme: The talk will be about new results on modular forms obtained by the speaker in collaboration with Shaun Cooper.
Universities are facing a tumultuous time with external regulation through TEQSA and the rise of MOOCs (Massive Open Online Courses). Disciplines within universities face the challenge of doing research, as well as producing a range of graduates capable of undertaking diverse careers. These are not new challenges. The emergence of MOOCs has raised the question, 'Why go to a University?' These tumultuous times provide a threat as well as an opportunity. How do we balance our activities? Does teaching and learning need to be re-conceptualised? Is it time to seriously consider the role of education and the 'value-add' university education provides? This talk will provide snapshots of work that demonstrate the value-add universities do provide. Evidence is used to challenge current understandings and to chart a way forward.
The aim of this Douglas-Rachford brainstorming session is to discuss:
-New applications and large scale experiments
-Diagnosing and profiling successful non-convex applications
-New conjectures
-Anything else you may think is relevant
Let spt(n) denote the number of smallest parts in the partitions of n. In 2008, Andrews found surprising congruences for the spt-function mod 5, 7 and 13. We discuss new congruences for spt(n) mod powers of 2. We give new generating function identities for the spt-function and Dyson's rank function. Recently with Andrews and Liang we found a spt-crank function that explains Andrews' spt-congruences mod 5 and 7. We extend these results by finding spt-cranks for various overpartition-spt-functions of Ahlgren, Bringmann, Lovejoy and Osburn. This most recent work is joint with Chris Jennings-Shaffer.
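As a small illustration of the objects involved (illustrative code, not part of the talk): computing spt(n) directly from the partitions of n, and checking Andrews' congruence spt(5n+4) ≡ 0 (mod 5) for the first few cases.

```python
# Illustrative: spt(n) = total number of smallest parts in the partitions of n,
# together with a check of Andrews' congruence spt(5n+4) == 0 (mod 5) for small n.
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing lists."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

def spt(n):
    return sum(p.count(min(p)) for p in partitions(n))

print([spt(n) for n in range(1, 11)])            # 1, 3, 5, 10, 14, 26, 35, 57, 80, 119
print([spt(5 * n + 4) % 5 for n in range(4)])    # all zero: spt(4), spt(9), spt(14), spt(19)
```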
Image processing research is dominated, to a considerable degree, by linear-additive models of images. For example, wavelet decompositions are very popular both with experimentalists and theoreticians primarily because of their neatly convergent properties. Fourier and orthogonal series decompositions are also popular in applications, as well as playing an important part in the analysis of wavelet methods.
Multiplicative decomposition, on the other hand, has had very little use in image processing. In 1-D signal processing and communication theory it has played a vital part (amplitude, phase, and frequency modulations of communications theory especially).
In many cases 2-D multiplicative decompositions have just been too hard to formulate or expand. Insurmountable problems (divergences) often occur as the subtle consequences of unconscious errors in the choice of mathematical structure. In my work over the last 17 years I've seen how to overcome some of the problems in 2-D, and the concept of phase is a central, recurring theme. But there is still so much more to be done in 2-D and higher dimensions.
This talk will be a whirlwind tour of some main ideas and applications of phase in imaging.
(Joint work with Konrad Engel and Martin Savelsbergh)
In an incremental network design problem we want to expand an existing network over several time periods, and we are interested in some quality measure for all the intermediate stages of the expansion process. In this talk, we look at the following simple variant: In each time period, we are allowed to add a single edge, the cost of a network is the weight of a minimum spanning tree, and the objective is to minimize the sum of the costs over all time periods. We describe a greedy algorithm for this problem and sketch a proof of the fact that it provides an optimal solution. We also indicate that incremental versions of other basic network optimization problems (shortest path and maximum flow) are NP-hard.
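To fix ideas, a toy sketch of a greedy rule of this kind (my own illustrative reading, not necessarily the precise algorithm or cost model analysed in the talk): in each period, build the expansion edge that most reduces the weight of the current minimum spanning tree, and pay the MST weight of each intermediate network.

```python
# Toy sketch of a greedy heuristic for incremental network design with MSTs.
def mst_weight(n, edges):
    """Kruskal's algorithm; edges are (u, v, w) on nodes 0..n-1 (network assumed connected)."""
    parent = list(range(n))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    total = 0.0
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total

n = 5
existing = [(0, 1, 4.0), (1, 2, 5.0), (2, 3, 6.0), (3, 4, 7.0), (4, 0, 8.0)]
expansion = [(0, 2, 1.0), (1, 3, 1.0), (2, 4, 1.0)]   # one expansion edge may be built per period

edges, total_cost = list(existing), 0.0
while expansion:
    best = min(expansion, key=lambda e: mst_weight(n, edges + [e]))  # greedy choice
    expansion.remove(best)
    edges.append(best)
    total_cost += mst_weight(n, edges)
    print("built", best, "period MST weight", mst_weight(n, edges))
print("total cost over all periods", total_cost)
```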
In his deathbed letter to G.H. Hardy, Ramanujan gave a vague definition of a mock modular function: at each root of unity its asymptotics match those of a modular form, though the choice of modular form depends on the root of unity. Recently Folsom, Ono and Rhoades have proved an elegant result about the match for a general family related to Dyson's rank (mock theta) function and the Andrews-Garvan crank (modular) function. In my talk I will outline some heuristics and elementary ingredients of the proof.
Joint work with David Wood (Monash University, Australia) and Eran Nevo (Ben-Gurion University of the Negev, Israel).
The maximum number of vertices of a graph of maximum degree $\Delta\ge 3$ and diameter $k\ge 2$ is upper bounded by $\Delta^{k}$. If we restrict our graphs to certain classes, better upper bounds are known. For instance, for the class of trees there is an upper bound of $2\Delta^{\lfloor k/2\rfloor}$. The main result of this paper is that, for large $\Delta$, graphs embedded in surfaces of bounded Euler genus $g$ behave like trees. Specifically, we show that, for large $\Delta$, such graphs have orders bounded from above by
$$\begin{cases} (c_0g+c_1)\Delta^{\lfloor k/2\rfloor} & \text{if $k$ is even}\\
(c_0g^2+c_1)\Delta^{\lfloor k/2\rfloor} & \text{if $k$ is odd}
\end{cases}$$
where $c_0,c_1$ are absolute constants.
With respect to lower bounds, we construct graphs of Euler genus $g$, odd diameter and orders $(c_0\sqrt{g}+c_1)\Delta^{\lfloor k/2\rfloor}$, for absolute constants $c_0,c_1$.
Our results answer in the negative a conjecture by Miller and Širáň (2005). Before this paper, there were constructions of graphs of Euler genus $g$ and orders $c_0\Delta^{\lfloor k/2\rfloor}$ for an absolute constant $c_0$. Also, Šiagiová and Simanjuntak (2004) provided an upper bound of $(c_0g+c_1)k\Delta^{\lfloor k/2\rfloor}$ with absolute constants $c_0,c_1$.
I will talk about the metrical theory of Diophantine approximation associated with linear forms that are simultaneously small in terms of absolute value rather than the classical nearest integer norm. In other words, we consider linear forms which are simultaneously close to the origin. A complete Khintchine-Groshev type theorem for monotonic approximating functions is established within the absolute value setup. Furthermore, the Hausdorff measure generalization of the Khintchine-Groshev type theorem is obtained. As a consequence we obtain the complete Hausdorff dimension theory. Staying within the absolute value setup, we prove that the corresponding set of badly approximable vectors is of full dimension.
The degree/diameter problem is to find the largest possible order of a graph (or digraph) with given maximum degree (or maximum out-degree) and given diameter. This is one of the unsolved problems in Extremal Graph Theory. Since the general problem is difficult many variations of the problem have been considered, including bipartite, vertex-transitive, mixed, planar, etc.
This talk is part of a series started in May. The provisional schedule is
Random matrix theory has undergone significant theoretical progress in the last two decades, including proofs on universal behaviour of eigenvalues as the matrix dimension becomes large, and a deep connection between algebraic manipulations of random matrices and free probability theory. Underlying many of the analytical advances are tools from complex analysis. By developing numerical versions of these tools, it is now possible to calculate random matrix statistics to high accuracy, leading to new conjectures on the behaviour of random matrices. We overview recent advances in this direction.
An exact bucket indexed (BI) mixed integer linear programming formulation for nonpreemptive single machine scheduling problems is presented that is a result of an ongoing investigation into strategies to model time in planning applications with greater efficacy. The BI model is a generalisation of the classical time indexed (TI) model to one in which at most two jobs can be processing in each time period. The planning horizon is divided into periods of equal length, but unlike the TI model, the length of a period is a parameter of the model and can be chosen to be as long as the processing time of the shortest job. The two models are equivalent if the problem data are integer and a period is of unit length, but when longer periods are used in the BI model, it can have significantly fewer variables and nonzeros than the TI model at the expense of a greater number of constraints. A computational study using weighted tardiness instances reveals the BI model significantly outperforms the TI model on instances where the mean processing time of the jobs is large and the range of processing times is small, that is, the processing times are clustered rather than dispersed.
Joint work with Natashia Boland and Riley Clement.
TBA
20 minute presentation followed by 10 minutes of questions and discussion.
Joint work with M. Mueller, B. O'Donoghue, and Y. Wang
We consider dynamic trading of a portfolio of assets in discrete periods over a finite time horizon, with arbitrary time-varying distribution of asset returns. The goal is to maximize the total expected revenue from the portfolio, while respecting constraints on the portfolio such as a required terminal portfolio and leverage and risk limits. The revenue takes into account the gross cash generated in trades, transaction costs, and costs associated with the positions, such as fees for holding short positions. Our model has the form of a stochastic control problem with linear dynamics and convex cost function and constraints. While this problem can be tractably solved in several special cases, such as when all costs are convex quadratic, or when there are no transaction costs, our focus is on the more general case, with nonquadratic cost terms and transaction costs.
We show how to use linear matrix inequality techniques and semidefinite programming to produce a quadratic bound on the value function, which in turn gives a bound on the optimal performance. This performance bound can be used to judge the performance obtained by any suboptimal policy. As a by-product of the performance bound computation, we obtain an approximate dynamic programming policy that requires the solution of a convex optimization problem, often a quadratic program, to determine the trades to carry out in each step. While we have no theoretical guarantee that the performance of our suboptimal policy is always near the performance bound (which would imply that it is nearly optimal) we observe that in numerical examples the two values are typically close.
In many problems in control, optimal and robust control, one has to solve global optimization problems of the form $\mathbf{P}:f^\ast=\min_{\mathbf x}\{f(\mathbf x):\mathbf x\in\mathbf K\}$, or, equivalently, $f^\ast=\max\{\lambda:f-\lambda\geq0\text{ on }\mathbf K\}$, where $f$ is a polynomial (or even a semi-algebraic function) and $\mathbf K$ is a basic semi-algebraic set. One may even need to solve the "robust" version $\min\{f(\mathbf x):\mathbf x\in\mathbf K;h(\mathbf x,\mathbf u)\geq0,\forall \mathbf u\in\mathbf U\}$ where $\mathbf U$ is a set of parameters. For instance, some static output feedback problems can be cast as polynomial optimization problems whose feasible set $\mathbf K$ is defined by a polynomial matrix inequality (PMI). And robust stability regions of linear systems can be modeled as parametrized polynomial matrix inequalities (PMIs), where the parameters $\mathbf u$ account for uncertainties and the (decision) variables $\mathbf x$ are the controller coefficients.
Therefore, to solve such problems one needs tractable characterizations of polynomials (and even semi-algebraic functions) which are nonnegative on a set, a topic of independent interest and of primary importance because it also has implications in many other areas.
We will review two kinds of tractable characterizations of polynomials which are nonnegative on a basic closed semi-algebraic set $\mathbf K\subset\mathbb R^n$. The first type of characterization is when knowledge of $\mathbf K$ is through its defining polynomials, i.e., $\mathbf K=\{\mathbf x:g_j(\mathbf x)\geq 0, j =1,\dots, m\}$, in which case some powerful certificates of positivity can be stated in terms of sums of squares (SOS)-weighted representations. For instance, this allows one to define a hierarchy of semidefinite relaxations which yields a monotone sequence of lower bounds converging to $f^\ast$ (and in fact, finite convergence is generic). There is also another way of looking at nonnegativity, where now knowledge of $\mathbf K$ is through moments of a measure whose support is $\mathbf K$. In this case, checking whether a polynomial is nonnegative on $\mathbf K$ reduces to solving a sequence of generalized eigenvalue problems associated with a countable (nested) family of real symmetric matrices of increasing size. When applied to $\mathbf P$, this results in a monotone sequence of upper bounds converging to the global minimum, which complements the previous sequence of lower bounds. These two (dual) characterizations provide convex inner (resp. outer) approximations (by spectrahedra) of the convex cone of polynomials nonnegative on $\mathbf K$.
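As a toy illustration of the first (SOS-weighted) kind of certificate: minimising $f(x)=x$ over $\mathbf K=\{x\in\mathbb R: 1-x^2\geq0\}=[-1,1]$ gives $f^\ast=-1$, and the identity $$ f(x)-f^\ast \;=\; x+1 \;=\; \tfrac{1}{2}(1+x)^2 \;+\; \tfrac{1}{2}\,(1-x^2) $$ expresses $f-f^\ast$ as an SOS-weighted combination of $1$ and the defining polynomial $g(x)=1-x^2$ (with SOS weights $\sigma_0(x)=\tfrac12(1+x)^2$ and $\sigma_1(x)=\tfrac12$), certifying $f\geq-1$ on $\mathbf K$; in this toy case the lowest level of the corresponding semidefinite hierarchy already attains $f^\ast$.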
UPDATE: Abstract submission is now open.
The main thrust of this workshop will be exploring the interface between important methodological areas of infectious disease modelling. In particular, two main themes will be explored: the interface between model-based data analysis and model-based scenario analysis, and the relationship between agent-based/micro-simulation and modelling.
I will discuss some models of what a "random abelian group" is, and some conjectures (the Cohen-Lenstra heuristics of the title) about how they show up in number theory. I'll then discuss the function field setting and a proof of these heuristics, with Ellenberg and Westerland. The proof is an example of a link between analytic number theory and certain classes of results in algebraic topology ("homological stability").
It is well known that the Moore digraph, namely a diregular digraph of degree d, diameter k and order $1 + d + d^2 + \dots + d^k$, only exists if d = 1 or k = 1. A (d,k)-digraph is a diregular digraph of degree d ≥ 2, diameter k ≥ 2 and order $d + d^2 + \dots + d^k$, one less than the Moore bound. Such a (d,k)-digraph is also called an almost Moore digraph.
The study of the existence of an almost Moore digraph of degree d and diameter k has received much attention. Fiol, Alegre and Yebra (1983) showed the existence of (d,2)-digraphs for all d ≥ 2. In particular, for d = 2 and k = 2, Miller and Fris (1988) enumerated all non-isomorphic (2,2)-digraphs. Furthermore, Gimbert (2001) showed that there is only one (d,2)-digraph for d ≥ 3. However, for degree 2 and diameter k ≥ 3, it is known that there is no (2,k)-digraph (Miller and Fris, 1992). Furthermore, it was proved that there is no (3,k)-digraph with k ≥ 3 (Baskoro, Miller, Širáň and Sutton, 2005). Recently, Conde, Gimbert, González, Miret, and Moreno (2008 & 2013) showed that no (d,k)-digraphs exist for k = 3,4 and for any d ≥ 2. Thus, the remaining open case is the existence of (d,k)-digraphs with d ≥ 4 and k ≥ 5.
Several necessary conditions for the existence of (d,k)-digraphs, for d ≥ 4 and k ≥ 5, have been obtained. In this talk, we shall discuss some necessary conditions for these (d,k)-digraphs. Open problems related to this study are also presented.
Joint work with N. Parikh, E. Chu, B. Peleato, and J. Eckstein
Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features, training examples, or both. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. We argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas-Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for $\ell_1$ problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, and support vector machines.
The related paper, code and talk slides are available at http://www.stanford.edu/~boyd/papers/admm_distr_stats.html.
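As a minimal sketch of how the method looks for one of the problems mentioned, the lasso (this assumes the standard scaled-form ADMM updates and is not the reference implementation linked above):

```python
# Minimal sketch of scaled-form ADMM for the lasso:
#   minimize (1/2)||Ax - b||^2 + lam*||z||_1   subject to   x = z.
import numpy as np

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))   # in practice one would cache a Cholesky factor
    Atb = A.T @ b
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))              # x-update: ridge-type least-squares solve
        z = soft_threshold(x + u, lam / rho)       # z-update: prox of lam*||.||_1
        u = u + x - z                              # scaled dual variable update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[:5] = 3.0
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = admm_lasso(A, b, lam=1.0)
print(np.count_nonzero(np.abs(x_hat) > 1e-6), "nonzero coefficients in the lasso solution")
```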
It was understood by Minkowski that one could prove interesting results in number theory by considering the geometry of lattices in R(n). (A lattice is simply a grid of points.) This technique is called the "geometry of numbers". We now understand much more about analysis and dynamics on the space of all lattices, and this has led to a deeper understanding of classical questions. I will review some of these ideas, with emphasis on the dynamical aspects.
There exist a variety of mechanisms to share indivisible goods between agents. One of the simplest is to let the agents take turns to pick an item. This mechanism is parameterized by a policy, the order in which agents take turns. A simple model of this mechanism was proposed by Bouveret and Lang in 2011. We show that in their setting the natural policy of letting the agents alternate in picking items is optimal. We also present a number of potential generalizations and extensions.
This is joint work with Nina Narodytska and Toby Walsh.
TBA
Within a nonzero, real Banach space we study the problem of characterising a maximal extension of a monotone operator in terms of minimality properties of representative functions that are bounded by the Penot and Fitzpatrick functions. We single out a property of the space of representative functions that enables a very compact treatment of maximality and pre-maximality issues. As this treatment does not assume reflexivity and we characterise this property, the existence of a counterexample has a number of consequences for the search for a suitable certificate for maximality in non-reflexive spaces. In particular, one is led to conjecture that some extra side condition to the usual CQ is inevitable. We go on to look at the simplest such condition, namely boundedness of the domain of the monotone operator, and obtain some positive results.
Many successful non-convex applications of the Douglas-Rachford method can be viewed as the reconstruction of a matrix, with known properties, from a subset of its entries. In this talk we discuss recent successful applications of the method to a variety of (real) matrix reconstruction problems, both convex and non-convex.
This is joint work with Fran Aragón and Matthew Tam.
I will report on recent joint work (with J.Y. Bello Cruz, H.M. Phan, and X. Wang) on the Douglas–Rachford algorithm for finding a point in the intersection of two subspaces. We prove that the method converges strongly to the projection of the starting point onto the intersection. Moreover, if the sum of the two subspaces is closed, then the convergence is linear with the rate being the cosine of the Friedrichs angle between the subspaces. Our results improve upon existing results in three ways: First, we identify the location of the limit and thus reveal the method as a best approximation algorithm; second, we quantify the rate of convergence, and third, we carry out our analysis in general (possibly infinite-dimensional) Hilbert space. We also provide various examples as well as a comparison with the classical method of alternating projections.
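A small numerical illustration of the rate statement (my own sketch, assuming the usual DR operator $T=\tfrac12(I+R_B R_A)$ with reflectors $R=2P-I$): for two planes in $\mathbb R^3$ meeting in a line at Friedrichs angle $t$, the iterates approach the projection of the starting point onto the intersection, with the error contracting by a factor $\cos t$ per step.

```python
# Numerical illustration: A = {z = 0} and B = span{e1, (0, cos t, sin t)} in R^3 meet
# in the line span{e1}; their Friedrichs angle is t, and the Douglas-Rachford iterates
# converge to P_{A ∩ B} x0 with the error shrinking by a factor cos(t) each step.
import numpy as np

t = 0.4                                    # Friedrichs angle between A and B
e1, e2, e3 = np.eye(3)
v = np.array([0.0, np.cos(t), np.sin(t)])

P_A = np.outer(e1, e1) + np.outer(e2, e2)  # projection onto the plane z = 0
P_B = np.outer(e1, e1) + np.outer(v, v)    # projection onto span{e1, v}
P_AB = np.outer(e1, e1)                    # projection onto A ∩ B = span{e1}

R_A, R_B = 2 * P_A - np.eye(3), 2 * P_B - np.eye(3)
T = 0.5 * (np.eye(3) + R_B @ R_A)          # Douglas-Rachford operator

x = np.array([1.0, 2.0, -1.5])
target = P_AB @ x
err_prev = np.linalg.norm(x - target)
for n in range(1, 11):
    x = T @ x
    err = np.linalg.norm(x - target)
    print(n, err, err / err_prev)          # the ratio should approach cos(t) ~ 0.921
    err_prev = err
```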
Extremal graph theory includes problems of determining the maximum number of edges in a graph on $n$ vertices that contains no forbidden subgraphs. We consider only simple graphs with no loops or multiple edges, and the forbidden subgraphs under consideration are cycles of length 3 and 4 (triangle and square). This problem was proposed by Erdős in 1975. Let $n$ denote the number of vertices in a graph $G$. By $ex(n; \{C_3,C_4\})$, or simply $ex(n;4)$, we mean the maximum number of edges in a graph of order $n$ and girth at least $g \geq 5$. There are only 33 exact values of $ex(n;4)$ currently known. In this talk, I give an overview of the current state of research on this problem, regarding the exact values, as well as the lower and upper bounds on the extremal numbers when the exact value is not known.
Popular accounts of evolution typically create an expectation that populations become ever better adapted over time, and some formal treatments of evolutionary processes suggest this too. However, such analyses do not highlight the fact that competition with conspecifics has negative population-level consequences too, particularly when individuals invest in success in zero-sum games. My own work is at the interface of theoretical biology and empirical data, and I will discuss several examples where an adaptive evolutionary process leads to something that appears silly from the population point of view, including a heightened risk of extinction in the Gouldian finch, reduced productivity of species in which males do not participate in parental care, and deterministic extinction of local populations in systems that feature sexual parasitism.
Recently a great deal of attention from biologists has been directed to understanding the role of knots in perhaps the most famous of long polymers - DNA. In order for our cells to replicate, they must somehow untangle the approximately two metres of DNA that is packed into each nucleus. Biologists have shown that DNA of various organisms is non-trivially knotted with certain topologies preferred over others. The aim of our work is to determine the "natural" distribution of different knot-types in random closed curves and compare that to the distributions observed in DNA.
Our tool to understand this distribution is a canonical model of long chain polymers - self-avoiding polygons (SAPs). These are embeddings of simple closed curves into a regular lattice. The exact computation of the number of polygons of length n and fixed knot type K is extremely difficult - indeed the current best algorithms can barely touch the first knotted polygons. Instead of exact methods, in this talk I will describe an approximate enumeration method - which we call the GAS algorithm. This is a generalisation of the famous Rosenbluth method for simulating linear polymers. Using this algorithm we have uncovered strong evidence that the limiting distribution of different knot-types is universal. Our data shows that a long closed curve is about 28 times more likely to be a trefoil than a figure-eight, and that the natural distribution of knots is quite different from those found in DNA.
Let $G$ be a connected graph with vertex set $V$ and edge set $E$. The distance $d(u,v)$ between two vertices $u$ and $v$ in $G$ is the length of a shortest $u$-$v$ path in $G$. For an ordered set $W = \{w_1, w_2, ..., w_k\}$ of vertices and a vertex $v$ in a connected graph $G$, the code of $v$ with respect to $W$ is the $k$-vector $$C_W(v)=(d(v,w_1),d(v,w_2), ..., d(v,w_k)).$$ The set $W$ is a resolving set for $G$ if distinct vertices of $G$ have distinct codes with respect to $W$. A resolving set for $G$ containing a minimum number of vertices is called a minimum resolving set or a basis for $G$. The metric dimension, denoted $\dim(G)$, is the number of vertices in a basis for $G$. The problem of finding the metric dimension of an arbitrary graph is NP-complete.
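For concreteness, a brute-force illustration of these definitions on a small graph (feasible only for tiny examples, given the NP-completeness just mentioned):

```python
# Illustrative: compute the codes C_W(v) from BFS distances and find a minimum
# resolving set of a small graph by exhaustive search.
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def metric_dimension(adj):
    vertices = sorted(adj)
    dist = {v: bfs_distances(adj, v) for v in vertices}
    for k in range(1, len(vertices)):
        for W in combinations(vertices, k):
            codes = {tuple(dist[w][v] for w in W) for v in vertices}
            if len(codes) == len(vertices):      # all codes distinct, so W is resolving
                return k, W
    return len(vertices), tuple(vertices)

# The 5-cycle C5 has metric dimension 2.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(metric_dimension(adj))
```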
The problem of finding minimum metric dimension is NP-complete for general graphs. Manuel et al. have proved that this problem remains NP-complete for bipartite graphs. The minimum metric dimension problem has been studied for trees, multi-dimensional grids, Petersen graphs, torus networks, Benes and butterfly networks, honeycomb networks, X-trees and enhanced hypercubes.
These concepts have been extended in various ways and studied for different subjects in graph theory, including such diverse aspects as the partition of the vertex set, decomposition, orientation, domination, and coloring in graphs. Many invariants arising from the study of resolving sets in graph theory offer subjects for applicable research.
The theory of conditional resolvability has evolved by imposing conditions on the resolving set. This talk is to recall the concepts and mention the work done so far and future work.
The rough Cayley graph is the analogue, in the context of topological groups, of the standard Cayley graph, which is defined for finitely generated groups. It will be shown how it is possible to associate such a graph to a compactly generated totally disconnected and locally compact (t.d.l.c.) group and how the rough Cayley graph represents an important tool for studying the structure of this kind of group.
We analyse local combinatorial structure in product sets of two subsets of a countable group which are "large" with respect to certain classes of (not necessarily invariant) means on the group. As an example of such a phenomenon, we can mention the result by Bergelson, Furstenberg and Weiss which says that the sumset of two sets of positive density in the integers contains locally an almost-periodic set. In this theorem, the large sets are the sets of positive density, and the combinatorial structure is an almost periodic set.
How do a student’s attitude, learning behaviour and achievement in mathematics or statistics relate to each other and how do these change during the course of their undergraduate degree program? These are some of the questions I have been addressing in a longitudinal study that I have undertaken as part of my PhD research. The questions were addressed by soliciting comments from students several times during their undergraduate degree programs; through an initial attitude survey, course-specific surveys for up to two courses each semester and interviews with students near the end of their degrees. In this talk I will introduce you to the attitudes and learning behaviours of the mathematics students I followed through the three years of my research, and discuss their responses to the completed surveys (attitude and course-specific). To illuminate the general responses obtained from the surveys (1074 students completed the initial attitude survey and 645 course-specific surveys were completed), I will also introduce you to Tom, Paul, Kate and Ben, four students of varying degrees of achievement, who I interviewed near the end of their mathematics degrees.
The split feasibility problem (SFP) consists in finding a point in a closed convex subset of a Hilbert space such that its image under a bounded linear operator belongs to a closed convex subset of another Hilbert space. Since its introduction in 1994 by Censor and Elfving, it has received much attention, thanks mainly to its applications to signal processing and image reconstruction. Iterative methods can be employed to solve the SFP. One of the most popular iterative methods is Byrne's CQ algorithm. However, this algorithm requires prior knowledge (or at least an estimate) of the norm of the bounded linear operator. We introduce a stepsize selection method so that the implementation of the CQ algorithm does not need any prior information about the operator norm. Furthermore, a relaxed CQ algorithm, where the two closed convex sets are both level sets of convex functions, and a Halpern-type algorithm are studied under the same stepsize rule, yielding both weak and strong convergence. A more general problem, the multiple-sets split feasibility problem, will also be presented. Numerical experiments are included to illustrate the applications to signal processing and, in particular, to compressed sensing and wavelet-based signal restoration.
Based on joint work with G. López and H.-K. Xu.
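As a rough illustration of a CQ-type iteration for the SFP, the sketch below performs a gradient projection step on $f(x) = \tfrac12\|(I - P_Q)Ax\|^2$ with the adaptive stepsize $\gamma_k = \rho f(x_k)/\|\nabla f(x_k)\|^2$; this is one published norm-free stepsize rule, and it is an assumption here that it matches the rule discussed in the talk (the function and parameter names are illustrative).

    import numpy as np

    def cq_iteration(A, proj_C, proj_Q, x0, n_iter=500, rho=1.0):
        # Find x in C with Ax in Q by gradient projection on f(x) = 0.5*||(I - P_Q)Ax||^2.
        # The stepsize uses only quantities computed at the current iterate, so no
        # estimate of ||A|| is required.
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            Ax = A @ x
            r = Ax - proj_Q(Ax)              # residual (I - P_Q)Ax
            f = 0.5 * float(r @ r)
            if f < 1e-16:                    # Ax is (numerically) in Q
                break
            g = A.T @ r                      # gradient of f at x
            gamma = rho * f / float(g @ g)   # adaptive stepsize, no operator norm needed
            x = proj_C(x - gamma * g)        # gradient step followed by projection onto C
        return x

    # Toy example: C = [0,1]^2, Q = [2,3]^2, A = 3I; any x in [2/3,1]^2 solves the SFP.
    A = 3.0 * np.eye(2)
    proj_C = lambda z: np.clip(z, 0.0, 1.0)
    proj_Q = lambda z: np.clip(z, 2.0, 3.0)
    print(cq_iteration(A, proj_C, proj_Q, x0=np.zeros(2)))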
Our goal is to estimate the rate of growth of a population governed by a simple stochastic model. We may choose $n$ sampling times at which to count the number of individuals present, but due to detection difficulties, or constraints on resources, we are only able to observe each individual with fixed probability $p$. We discuss the optimal sampling times at which to make our observations in order to approximately maximize the accuracy of our estimate. To achieve this, we maximize the expected amount of information obtained from such binomial observations, that is, the Fisher information. For a single sample, we derive an explicit form of the Fisher information. However, finding the Fisher information for higher values of $n$ appears intractable. Nonetheless, we find a very good approximation to the Fisher information by exploiting the probabilistic properties of the underlying stochastic process and developing a new class of delayed distributions. Both numerical and theoretical results strongly support this approximation and confirm its high level of accuracy.
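For reference, since the abstract does not spell out the likelihood, the notation here is generic: the Fisher information about a parameter $\theta$ carried by an observation $Y$ with log-likelihood $\ell(\theta;Y)$ is, under the usual regularity conditions,
\begin{equation}
I(\theta) = \mathbb{E}\left[\left(\frac{\partial}{\partial\theta}\,\ell(\theta;Y)\right)^{2}\right]
          = -\,\mathbb{E}\left[\frac{\partial^{2}}{\partial\theta^{2}}\,\ell(\theta;Y)\right].
\end{equation}
In the setting above, $\theta$ would encode the growth rate and $Y$ the binomial counts obtained at the chosen sampling times.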
A numerical method is proposed for constructing an approximation of the Pareto front of nonconvex multi-objective optimal control problems. First, a suitable scalarization technique is employed for the multi-objective optimal control problem. Then, using a grid of scalarization parameter values, i.e., a grid of weights, a sequence of single-objective optimal control problems is solved to obtain points which are spread over the Pareto front. The technique is illustrated on problems involving tumor anti-angiogenesis and a fed-batch bioreactor, which exhibit bang–bang, singular and boundary types of optimal control. We illustrate that the Bolza form, the traditional scalarization in optimal control, fails to represent all the compromise, i.e., Pareto optimal, solutions.
Joint work with Helmut Maurer.
C. Y. Kaya and H. Maurer, A numerical method for nonconvex multi-objective optimal control problems, Computational Optimization and Applications, (appeared online: September 2013, DOI 10.1007/s10589-013-9603-2)
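To make the scalarization issue concrete (an illustration rather than a summary of the paper): for two objectives $J_1, J_2$ and weights $w_1, w_2 \ge 0$, the weighted-sum (Bolza-type) problem and a weighted Chebyshev problem read, respectively,
\begin{equation}
\min_{u}\; w_1 J_1(u) + w_2 J_2(u)
\qquad\text{and}\qquad
\min_{u}\; \max\{\, w_1 (J_1(u) - J_1^{*}),\; w_2 (J_2(u) - J_2^{*}) \,\},
\end{equation}
where $J_1^{*}, J_2^{*}$ are ideal (utopia) values. Sweeping the weights in the weighted sum can miss Pareto points on nonconvex parts of the front, whereas min-max scalarizations of the second kind can in principle reach them.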
In this talk I will discuss a method of finding simple groups acting on trees. I will discuss the theory behind this process and outline some proofs (time permitting).
The scale function plays a key role in the structure theory of totally disconnected locally compact (t.d.l.c.) groups. Whereas the scale function is known to be continuous on a t.d.l.c. group, analysing the continuity of the scale in a wider context requires topologising the group of continuous automorphisms. Existing topologies for Aut(G) are outlined and shown to be insufficient for guaranteeing the continuity of the scale function. Possible methods of generalising these topologies are explored.
In this talk I will describe an algorithm for performing a random walk in the space of all words equal to the identity in a finitely presented group. We prove that the algorithm samples from a well-defined distribution, and using this distribution we can estimate the mean length of a trivial word. We then use this information to estimate the cogrowth of the group. We ran the algorithm on several examples; where the cogrowth series is known exactly, our results agree with the exact values. Running the algorithm on Thompson's group $F$, we see behaviour consistent with the hypothesis that $F$ is not amenable.
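For context on why cogrowth detects amenability: if $G$ is given by a presentation with $k \ge 2$ generators and $c_n$ denotes the number of freely reduced words of length $n$ in the kernel of the natural map from the free group $F_k$ onto $G$, the cogrowth rate is
\begin{equation}
\alpha = \limsup_{n \to \infty} c_n^{1/n},
\end{equation}
and, by the criterion of Grigorchuk and Cohen, $G$ is amenable if and only if $\alpha = 2k-1$.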
We propose and study a new method, called the Interior Epigraph Directions (IED) method, for solving constrained nonsmooth and nonconvex optimization problems. The IED method considers the dual problem induced by a generalized augmented Lagrangian duality scheme, and obtains the primal solution by generating a sequence of iterates in the interior of the dual epigraph. First, a deflected subgradient (DSG) direction is used to generate a linear approximation to the dual problem. Second, this linear approximation is solved using a Newton-like step. This Newton-like step is inspired by the Nonsmooth Feasible Directions Algorithm (NFDA), recently proposed by Freire and co-workers for solving unconstrained, nonsmooth convex problems. We have modified the NFDA so that it takes advantage of the special structure of the epigraph of the dual function. We prove that all the accumulation points of the primal sequence generated by the IED method are solutions of the original problem. We carry out numerical experiments using test problems from the literature. In particular, we study several instances of the Kissing Number Problem, previously solved by various approaches such as an augmented penalty method, the DSG method, as well as the popular differentiable solvers ALBOX (a predecessor of ALGENCAN), Ipopt and LANCELOT. Our experiments show that the quality of the solutions obtained by the IED method is comparable with (and sometimes better than) those obtained by the other solvers mentioned.
Joint work with Wilhelm P. Freire and C. Yalcin Kaya.
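As one concrete instance of a generalized augmented Lagrangian duality scheme (the abstract does not specify which scheme is used, so this is illustrative only): for an equality-constrained problem $\min\{f(x) : h(x)=0\}$, the sharp Lagrangian and its dual function are
\begin{equation}
L(x,\lambda,c) = f(x) + \langle \lambda, h(x)\rangle + c\,\|h(x)\|,
\qquad
q(\lambda,c) = \inf_{x} L(x,\lambda,c),
\end{equation}
and the dual problem is to maximize the concave function $q$; DSG-type steps provide ascent directions for $q$.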
This colloquium will explain some of the background and significance of the concept of amenability. Arguments with finite groups frequently, without remark, count the number of elements in a subset or average a function over the group. It is usually important in these arguments that the result of the calculation is invariant under translation. Such calculations cannot be so readily made in infinite groups but the concepts of amenability and translation invariant measure on a group in some ways take their place. The talk will explain this and also say how random walks relate to these same ideas.
Times and Dates:
Mon 2 Dec 2013: 10-12, 2-4
Tue 3 Dec 2013: 10-12, 2-4
Wed 4 Dec 2013: 10-12, 2-4
Thu 5 Dec 2013: 10-12, 2-4
Abstract: This will be a short and fast introduction to the field of geometric group theory. Assumed knowledge is abstract algebra (groups and rings) and metric spaces. Topics to be covered include: free groups, presentations, quasiisometry, hyperbolic groups, Dehn functions, growth, amenable groups, cogrowth, percolation, automatic groups, CAT(0) groups, examples: Thompson's group F, self-similar groups (Grigorchuk group), Baumslag-Solitar groups.
In this talk, we provide some characterizations of ultramaximally monotone operators. We establish the Brezis--Haraux condition in the setting of a general Banach space. We also present some characterizations of reflexivity of a Banach space by a linear continuous ultramaximally monotone operator.
Joint work with Jon Borwein.
We develop an integer programming based decision support tool that quickly assesses the throughput of a coal export supply chain for a given level of demand. The tool can be used to rapidly evaluate a number of infrastructures for several future demand scenarios in order to identify a few that should be investigated more thoroughly using a detailed simulation model. To make the natural integer programming model computationally tractable, we exploit problem structure to reduce the number of variables and employ aggregation as well as disaggregation to strengthen the linear programming relaxation. Afterward, we implicitly reformulate the problem to exclude inherent symmetry in the original formulation and use Hall's marriage theorem to ensure its feasibility. Studying the polyhedral structure of a subproblem, we enhance the formulation by generating strong valid inequalities. The integer programming tool is used in a computational study in which we analyze system performance for different levels of demand to identify potential bottlenecks.
Psychologists and other experiment designers are often faced with the task of creating sets of items to be used in factorial experiments. These sets need to be as similar as possible to each other in terms of the items' given attributes. We name this problem Picking Items for Experimental Sets (PIES). In this talk I will discuss how similarity can be defined, mixed integer programs to solve PIES and heuristic methods.
I will also examine a popular integer programming heuristic, the feasibility pump, which aims to find an integer feasible solution for a MIP. I will show how using different projection algorithms (including Douglas-Rachford), adding randomness, and reformulating the projection spaces change the effectiveness of the heuristic.
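To fix ideas, the sketch below is a basic feasibility pump for a pure 0-1 feasibility problem, alternating a rounding step with an $\ell_1$-projection LP (solved with scipy's linprog); the random flips used to escape cycling are a simplification, and the function and parameter names are illustrative.

    import numpy as np
    from scipy.optimize import linprog

    def feasibility_pump(A_ub, b_ub, n_vars, max_iter=100, seed=0):
        # Basic feasibility pump for a 0-1 problem: find x in {0,1}^n with A_ub x <= b_ub.
        # Assumes the LP relaxation is feasible.
        rng = np.random.default_rng(seed)
        bounds = [(0.0, 1.0)] * n_vars
        # Start from any point of the LP relaxation (zero objective).
        x_lp = linprog(np.zeros(n_vars), A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs").x
        for _ in range(max_iter):
            x_int = np.round(x_lp)                       # rounding step
            if np.all(A_ub @ x_int <= b_ub + 1e-9):
                return x_int                             # integer feasible point found
            # Projection step: LP point minimising the l1 distance to x_int.
            # For 0-1 variables this distance is linear: x_j if x_int_j = 0, 1 - x_j otherwise.
            c = np.where(x_int < 0.5, 1.0, -1.0)
            x_new = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs").x
            if np.array_equal(np.round(x_new), x_int):
                # Cycling detected: flip a few random components of x_int and re-project.
                flip = rng.choice(n_vars, size=max(1, n_vars // 10), replace=False)
                x_int[flip] = 1.0 - x_int[flip]
                c = np.where(x_int < 0.5, 1.0, -1.0)
                x_new = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs").x
            x_lp = x_new
        return None                                      # no integer point found in time

    # Tiny example: -x1 - x2 <= -1 and x1 + x2 <= 1, i.e. exactly one of two binaries.
    A = np.array([[-1.0, -1.0], [1.0, 1.0]]); b = np.array([-1.0, 1.0])
    print(feasibility_pump(A, b, n_vars=2))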
A classical nonlinear PDE used for modelling heat transfer between concentric cylinders by fluid convection, and also for modelling porous flow, can be solved by hand using a low-order perturbation method. Extending this solution to higher order using computer algebra is surprisingly hard owing to exponential growth in the size of the series terms when computed naively. In the mid-1990s, so-called "Large Expression Management" tools were invented to allow the construction and use of "computation sequences" or "straight-line programs" to extend the solution to 11th order. The cost of the method was $O(N^8)$ in memory: high, but not exponential.
Twenty years of doubling of computer power allows this method to get 15 terms. A new method, which reduces the memory cost to $O(N^4)$, allows us to compute to $N=30$. At this order, singularities can reliably be detected using the quotient-difference algorithm. This allows confident investigation of the solutions, for different values of the Prandtl number.
This work is joint with Yiming Zhang (PhD Oct 2013).
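To illustrate how singularities can be located from series coefficients, the sketch below builds (part of) the quotient-difference table from the rhombus rules; under suitable assumptions the $q$-columns converge to the reciprocals of the singularities nearest the origin. The function name and interface are illustrative, and in practice one inspects whole columns of the table rather than a single entry.

    import numpy as np

    def qd_singularity_estimates(coeffs, num_poles=1):
        # Quotient-difference table built from Taylor coefficients c_0, c_1, ...
        # Rhombus rules: e_0^(n) = 0, q_1^(n) = c_{n+1}/c_n,
        #   e_k^(n) = q_k^(n+1) - q_k^(n) + e_{k-1}^(n+1),
        #   q_{k+1}^(n) = q_k^(n+1) * e_k^(n+1) / e_k^(n).
        # The last entry of each q-column approximates the reciprocal of a singularity.
        c = np.asarray(coeffs, dtype=float)
        q = [[c[n + 1] / c[n] for n in range(len(c) - 1)]]   # q_1 column
        e = [[0.0] * len(c)]                                 # e_0 column
        estimates = []
        for k in range(1, num_poles + 1):
            qk = q[k - 1]
            estimates.append(1.0 / qk[-1])
            if k == num_poles:
                break
            ek = [qk[n + 1] - qk[n] + e[k - 1][n + 1] for n in range(len(qk) - 1)]
            e.append(ek)
            q.append([qk[n + 1] * ek[n + 1] / ek[n] for n in range(len(ek) - 1)])
        return estimates

    # Example: the series of 1/(1 - 2z) has coefficients 2^n and a singularity at z = 0.5.
    print(qd_singularity_estimates([2.0 ** n for n in range(12)], num_poles=1))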
We consider a problem of minimising $f_1(x)+f_2(y)$ over $x \in X \subseteq R^n$ and $y \in Y \subseteq R^m$ subject to a number of extra coupling constraints of the form $g_1(x) g_2(y) \geq 0$. Due to these constraints, the problem may have a large number of local minima. For any feasible combination of signs of $g_1(x)$ and $g_2(y)$, the coupled problem is decomposable, and the resulting two problems are assumed to be easily solved. An approach to solving the coupled problem is presented. We apply it to solving coupled monotonic regression problems arising in experimental psychology.
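For a single coupling constraint the idea can be written explicitly (a simplified illustration, not the general scheme of the talk): since $g_1(x) g_2(y) \geq 0$ holds exactly when $g_1(x)$ and $g_2(y)$ are both nonnegative or both nonpositive, the problem splits into two decoupled branches,
\begin{equation}
\min_{s \in \{+1,-1\}} \left[ \min_{x \in X,\; s\,g_1(x) \ge 0} f_1(x) \;+\; \min_{y \in Y,\; s\,g_2(y) \ge 0} f_2(y) \right],
\end{equation}
where each inner problem is one of the easily solved subproblems; with several coupling constraints the number of sign combinations grows, which is where a more careful treatment is needed.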
I will review the creation and development of the concept of number and the role of visualisation in that development. The relationship between innate human capabilities on the one hand and mathematical research and education on the other will be discussed.
In this seminar I will review my recent work on Hankel determinants and their number-theoretic uses. I will briefly touch on the $p$-adic evaluation of a particular determinant and comment on how Hankel determinants, together with Padé approximants, can be used in some irrationality proofs. A fundamental determinant property will be demonstrated, and I will show what implications it has for positive Hankel determinants and where we might go from here.
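For reference, the Hankel determinants of a sequence $(c_k)_{k \ge 0}$ are
\begin{equation}
H_n = \det\big(c_{i+j}\big)_{0 \le i, j \le n-1} =
\begin{vmatrix}
c_0 & c_1 & \cdots & c_{n-1} \\
c_1 & c_2 & \cdots & c_{n} \\
\vdots & \vdots & \ddots & \vdots \\
c_{n-1} & c_{n} & \cdots & c_{2n-2}
\end{vmatrix},
\end{equation}
and, roughly speaking, it is these determinants that control the Padé table of the generating function $\sum_k c_k z^k$, which is where the connection to irrationality proofs enters.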
The previous assessment method for MCHA2000 - Mechatronic Systems (which is common to many other courses) allowed students to collect marks from assessments and quizzes during the semester and pass the course without reaching a satisfactory level of competency in some topics. In 2013, we obtained permission from the President of Academic Senate to test a different assessment scheme aimed at preventing students from passing without attaining a minimum level of competency in all topics of the course. This presentation discusses the assessment scheme tested and the results we obtained, which suggest that the proposed scheme makes a difference.
MCHA2000 is a course about modelling, simulation, and analysis of physical system dynamics. It is believed that the proposed assessment scheme is applicable to other courses.
Bio: A/Prof Tristan Perez, Lecturer of MCHA2000. http://www.newcastle.edu.au/profile/tristan-perez