During my study leave in 2018, I applied nonlinear stability analysis techniques to the Douglas-Rachford algorithm, with the aim of shedding light on the interesting non-convex case, where convergence is often observed but seldom proven. The Douglas-Rachford algorithm can solve optimisation and feasibility problems; it provably converges weakly to solutions in the convex case and constitutes a practical heuristic in non-convex cases. In nonlinear stability analysis, Lyapunov functions are stability certificates for difference inclusions. Some other recent nonlinear stability results are showcased as well.
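For readers unfamiliar with the method, the Douglas-Rachford iteration for a two-set feasibility problem can be sketched in a few lines. The circle-and-line instance below is a standard non-convex test case; the sets, starting point and iteration count are illustrative choices, not taken from the talk.

```python
import math

# Douglas-Rachford for the feasibility problem: find a point in the
# intersection of the unit circle (a non-convex set) and the line y = 0.5.
def p_circle(p):
    n = math.hypot(p[0], p[1])
    return (p[0] / n, p[1] / n)        # nearest point on the unit circle

def p_line(p):
    return (p[0], 0.5)                 # nearest point on the line y = 0.5

def reflect(proj, p):
    q = proj(p)
    return (2 * q[0] - p[0], 2 * q[1] - p[1])

x = (2.0, 1.0)                         # arbitrary starting point
for _ in range(100):
    r = reflect(p_line, reflect(p_circle, x))
    x = ((x[0] + r[0]) / 2, (x[1] + r[1]) / 2)

shadow = p_circle(x)                   # the "shadow" iterate on the circle
print(shadow)
```

The shadow sequence of projections onto the circle is what one typically monitors; in the convex setting its convergence is guaranteed, while in examples like this one it is only observed.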
Zombies are a popular figure in pop culture/entertainment and they are usually portrayed as being brought about through an outbreak or epidemic. Consequently, we model a zombie attack, using biological assumptions based on popular zombie movies. We introduce a basic model for zombie infection, determine equilibria and their stability, and illustrate the outcome with numerical solutions. We then refine the model to introduce a latent period of zombification, whereby humans are infected, but not infectious, before becoming undead. We then modify the model to include the effects of possible quarantine or a cure. Finally, we examine the impact of regular, impulsive reductions in the number of zombies and derive conditions under which eradication can occur. We show that only quick, aggressive attacks can stave off the doomsday scenario: the collapse of society as zombies overtake us all.
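The flavour of the basic model can be conveyed by a short simulation. The sketch below integrates an SZR-type system with a simple Euler scheme; the parameter values are invented for illustration (and the birth term is set to zero), so they are not those used in the talk.

```python
# Euler integration of a basic SZR ("zombie") model:
#   S' = -beta*S*Z - delta*S              (susceptibles)
#   Z' =  beta*S*Z + zeta*R - alpha*S*Z   (zombies)
#   R' =  delta*S + alpha*S*Z - zeta*R    (removed, may resurrect)
# Parameter values below are invented for illustration.
beta, alpha, zeta, delta = 0.005, 0.001, 0.02, 0.0001
S, Z, R = 500.0, 1.0, 0.0
dt = 0.001
for _ in range(20000):                    # simulate 20 time units
    dS = -beta * S * Z - delta * S
    dZ = beta * S * Z + zeta * R - alpha * S * Z
    dR = delta * S + alpha * S * Z - zeta * R
    S, Z, R = S + dt * dS, Z + dt * dZ, R + dt * dR
print(round(S, 2), round(Z, 2), round(R, 2))   # doomsday: S collapses to ~0
```

Without intervention terms the susceptible population collapses, which is exactly the doomsday outcome the abstract describes; quarantine, cure and impulsive-culling variants modify the right-hand sides above.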
In this talk, I will show how to build $C^*$-algebras using a family of local homeomorphisms. Then we will compute the KMS states of the resulting algebras using the Laca-Neshveyev machinery. I will then apply this result to $C^*$-algebras of $k$-graphs and obtain interesting $C^*$-algebraic information about $k$-graph algebras. This talk is based on joint work with Astrid an Huef and Iain Raeburn.
The KMS condition for equilibrium states of C*-dynamical systems has been around since the 1960s. With the introduction of systems arising from number theory and from semigroup dynamics following pioneering work of Bost and Connes, their study has accelerated significantly in the last 25 years. I will give a brief introduction to C*-dynamical systems and their KMS states and discuss two constructions that exhibit fascinating connections with key open questions in mathematics such as Hilbert's 12th problem on explicit class field theory and Furstenberg's $\times 2 \times 3$ conjecture.
Using a variant of the Laca-Raeburn program for calculating KMS states, Laca, Raeburn, Ramagge and Whittaker showed that, at any inverse temperature above a critical value, the KMS states arising from self-similar actions of groups (or groupoids) $G$ are parameterised by traces on $C^*(G)$. The parameterisation takes the form of a self-mapping $\chi$ of the trace space of $C^*(G)$ that is built from the structure of the stabilisers of the self-similar action. I will outline how this works, and then sketch how to see that $\chi$ has a unique fixed point, which picks out the ``preferred'' trace of $C^*(G)$ corresponding to the only KMS state that persists at the critical inverse temperature. The first part of this will be an exposition of results of Laca-Raeburn-Ramagge-Whittaker. The second part is joint work with Joan Claramunt.
The problem of packing space with regular tetrahedra has a 2000 year history. This talk surveys the history of work on the problem. It includes work by mathematicians, computer scientists, physicists, chemists, and materials scientists. Much progress has been made on it in recent years, yet there remain many unsolved problems.
In this talk, we will present a brief overview of the mathematical diffraction of structures which have no translational symmetry but may nevertheless exhibit long-range order. We introduce aperiodic tilings as toy models for such structures and discuss the relevant measure-theoretic formulation of diffraction analysis. In particular, we focus on the component of the diffraction that suggests stochasticity but can be non-trivial for deterministic systems, and on how its absence can be confirmed using techniques involving Lyapunov exponents and Mahler measures. This is joint work with Michael Baake, Michael Coons, Franz Gaehler and Uwe Grimm.
Mahler's method in number theory is an area wherein one answers questions surrounding the transcendence and algebraic independence of both power series $F(z)$, which satisfy the functional equation $$a_0(z)F(z)+a_1(z)F(z^k)+\cdots+a_d(z)F(z^{k^d})=0$$ for some integers $k\geqslant 2$ and $d\geqslant 1$ and polynomials $a_0(z),\ldots,a_d(z)$, and their special values $F(\alpha)$, typically at algebraic numbers $\alpha$. The most important examples of Mahler functions arise from sequences central to theoretical computer science and dynamical systems, and many are related to digital properties of sets of numbers. For example, the generating function $T(z)$ of the Thue-Morse sequence, which is known as the fixed point of a uniform morphism in computer science or, equivalently, of a constant-length substitution system in dynamics, is a Mahler function. In 1930, Mahler proved that the numbers $T(\alpha)$ are transcendental for all non-zero algebraic numbers $\alpha$ in the complex open unit disc. With digital computers and computation so prevalent in our society, such results seem almost second nature these days, and thinking about them is very natural. But what is one really trying to communicate by proving that functions or numbers such as those considered in Mahler's method are transcendental?
In this talk, highlighting work from the very beginning of Mahler's career, we speculate---and provide some variations---on what Mahler was really trying to understand. This talk will combine modern and historical methods and will be accessible to students.
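To make the functional equation concrete: a standard $\pm 1$ variant of the Thue-Morse generating function, $F(z)=\sum_{n\geqslant 0}(-1)^{t_n}z^n$ with $t_n$ the Thue-Morse sequence, satisfies the Mahler equation $F(z)=(1-z)F(z^2)$, i.e. $a_0(z)=1$, $a_1(z)=-(1-z)$, $k=2$, $d=1$. The sketch below checks this numerically; the truncation length and evaluation point are arbitrary choices.

```python
# Numerically check the Mahler functional equation F(z) = (1 - z) F(z^2)
# for F(z) = sum_n (-1)^{t_n} z^n, a +/-1 variant of the Thue-Morse
# generating function T(z) mentioned in the abstract.
def thue_morse(n):
    return bin(n).count("1") % 2        # t_n = parity of binary digit sum

def F(z, terms=60):
    return sum((-1) ** thue_morse(n) * z ** n for n in range(terms))

z = 0.3
lhs, rhs = F(z), (1 - z) * F(z * z)
print(abs(lhs - rhs) < 1e-12)           # truncation error ~ 0.3**60
```

The identity follows from the product form $F(z)=\prod_{k\geqslant 0}(1-z^{2^k})$, since multiplying by $(1-z)$ shifts the product index by one.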
This project aims to investigate algebraic objects known as 0-dimensional groups, which are a mathematical tool for analysing the symmetry of infinite networks. Group theory has been used to classify possible types of symmetry in various contexts for nearly two centuries now, and 0-dimensional groups are the current frontier of knowledge. The expected outcome of the project is that the understanding of the abstract groups will be substantially advanced, and that this understanding will shed light on structures possessing 0-dimensional symmetry. In addition to being cultural achievements in their own right, advances in group theory such as this often have significant translational benefits as well. Anticipated benefits include the creation of tools relevant to information science and the training of researchers in the use of these tools.
The project aims to develop novel techniques to investigate geometric analysis on infinite-dimensional bundles, as well as geometric analysis of pathological spaces with a Cantor set as fibre, which arise in models for the fractional quantum Hall effect and topological matter, areas recognised with the 1998 and 2016 Nobel Prizes. Building on the applicant's expertise in the area, the project will involve postgraduate and postdoctoral training in order to enhance Australia's position at the forefront of international research in geometric analysis. Ultimately, the project will enhance Australia's leading position in the area of index theory by developing novel techniques to solve challenging conjectures, and by mentoring HDR students and ECRs.
This project aims to solve hard, outstanding problems which have impeded our ability to progress in the area of quantum or noncommutative calculus. Calculus has provided an invaluable tool to science, enabling scientific and technological revolutions throughout the past two centuries. The project will initiate a program of collaboration among top mathematical researchers from around the world and bring together two separate mathematical areas into a powerful new set of tools. The outcomes from the project will impact research at the forefront of mathematical physics and other sciences and enhance Australia's reputation and standing.
Imagine a world where physical and chemical laboratories are unnecessary because all experiments can be simulated accurately on a computer. In principle this is possible by solving the quantum mechanical Schrödinger equation. Unfortunately, this is far from trivial and practically impossible for large and complex materials and reactions. In 1998, Walter Kohn and John A. Pople won the Nobel Prize in Chemistry for developing density-functional theory (DFT). DFT makes it possible to find solutions of the Schrödinger equation much more efficiently than ab-initio and similar approaches, thus enabling the computation of materials properties in an unprecedented way. In this seminar, I will introduce quantum mechanical principles and the basic idea of DFT. Then, I will present an example of the computational elucidation of a reaction mechanism in materials science.
The old joke is that a topologist can’t distinguish between a coffee cup and a doughnut. A recent variant of homology, called persistent homology, can be used in data analysis to understand the shape of data. I will give an introduction to persistent homology and describe two example applications of this tool.
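As a small taste of the idea: 0-dimensional persistent homology of a point cloud tracks when connected components are born (at scale 0) and die (when a growing distance scale merges them). A minimal union-find sketch, written for this summary rather than taken from the talk, is:

```python
import math
from itertools import combinations

# 0-dimensional persistence of a point cloud: process edges in order of
# length (Kruskal-style); each merge of two components records a "death".
def h0_persistence(points):
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    edges = sorted((math.dist(p, q), i, j)
                   for (i, p), (j, q) in combinations(enumerate(points), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)        # one component dies at scale d
    return deaths                   # the final component never dies

pts = [(0, 0), (0, 1), (5, 0), (5, 1)]
print(h0_persistence(pts))          # -> [1.0, 1.0, 5.0]
```

The long-lived bar (death at 5.0) reflects the two well-separated clusters, which is the kind of shape information persistence diagrams expose.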
I introduce and demonstrate the Coq proof assistant.
It is commonly expected that $e$, $\log 2$, $\sqrt{2}$, among other ``classical'' numbers, behave, in many respects, like almost all real numbers. For instance, their decimal expansion should contain every finite block of digits from $\{0, \ldots , 9\}$. We are very far away from establishing such a strong assertion. However, there has been some small recent progress in that direction. Let $\xi$ be an irrational real number. Its irrationality exponent, denoted by $\mu (\xi)$, is the supremum of the real numbers $\mu$ for which there are infinitely many integer pairs $(p, q)$ such that $|\xi - \frac{p}{q}| < q^{-\mu}$. It measures the quality of approximation to $\xi$ by rationals. We always have $\mu (\xi) \ge 2$, with equality for almost all real numbers and for irrational algebraic numbers (by Roth's theorem). We prove that, if the irrationality exponent of $\xi$ is equal to $2$ or slightly greater than $2$, then the decimal expansion of $\xi$ cannot be `too simple', in a suitable sense. Our result applies, among other classical numbers, to badly approximable numbers, non-zero rational powers of ${\rm e}$, and $\log (1 + \frac{1}{a})$, provided that the integer $a$ is sufficiently large. It establishes an unexpected connection between the irrationality exponent of a real number and its decimal expansion.
Motivated by the construction of conformal field theories, Jones recently discovered a very general process that produces actions of the Thompson groups $F$, $T$ and $V$, such as unitary representations or actions on $C^{\ast}$-algebras. I will give a general panorama of this construction along with many examples, and present various applications regarding analytical properties of groups and, if time permits, applications in lattice theory (e.g. in quantum field theory).
Let $t$ be the multiplicative inverse of the golden mean. In 1995 Sean Cleary introduced the irrational-slope Thompson's group $F_t$, which is the group of piecewise-linear maps of the interval $[0,1]$ with breakpoints in $Z[t]$ and slopes powers of $t$. In this talk we describe this group using tree-pair diagrams, then exhibit a finite presentation and a normal form, and prove that its commutator subgroup is simple. This group is the first example of a group of piecewise-linear maps of the interval whose abelianisation has torsion, and it is an open problem whether it is a subgroup of Thompson's group $F$.
A Jonsson-Tarski algebra is a set $X$ endowed with an isomorphism $X \to X \times X$. As observed by Freyd, the category of Jonsson-Tarski algebras is a Grothendieck topos - a highly structured mathematical object which is at once a generalised topological space and a generalised universe of sets.
In particular, one can do algebra, topology and functional analysis inside the Jonsson-Tarski topos, and on doing so, the following objects simply pop out: Cantor space; Thompson's group $V$; the Leavitt algebra $L_2$; the Cuntz semigroup $S_2$; and the reduced $C^{\ast}$-algebra of $S_2$. The first objective of this talk is to explain how this happens.
The second objective is to describe other "self-similar toposes" associated to, for example, self-similar group actions, directed graphs and higher-rank graphs; again, each such topos contains within it a familiar menagerie of algebraic-analytic objects. If time permits, I will also explain a further intriguing example which gives rise to Thompson's group $F$ and, I suspect, the Farey AF algebra.
No expertise in topos theory is required; such background as is necessary will be developed in the talk.
Sea ice acts as a refrigerator for the world. Its bright surface reflects solar heat, and the salt it expels during the freezing process drives cold water towards the equator. As a result, sea ice plays a crucial role in our climate system. Antarctic sea-ice extent has shown a large degree of regional variability, in stark contrast with the steadily decreasing trend found in the Arctic. This variability is within the range of natural fluctuations, and may be ascribed to the high incidence of weather extremes, like intense cyclones, that give rise to large waves, significant wind drag, and ice deformation. The role exerted by waves on sea ice is still particularly enigmatic, and it has attracted a lot of attention over the past years. Starting from theoretical knowledge, new understanding based on experimental models and computational fluid dynamics is presented. But the exploration of waves-in-ice cannot be exhausted without going into the field. And this is why I found myself in the middle of the Southern Ocean during a category five polar cyclone to measure waves…
The models of collective decision-making considered in this presentation are nonlinear interconnected systems with saturating interactions, similar to Hopfield networks. These systems encode the possible outcomes of a decision process into different steady states of the dynamics. When the model is cooperative, i.e., when the underlying adjacency matrix is Metzler, the system is characterised by the presence of two main attractors, one positive and the other negative, representing two choices of agreement among the agents, associated to the Perron-Frobenius eigenvector of the system. Such equilibria are achieved when there is a sufficiently high 'social commitment' among the agents (here interpreted as a bifurcation parameter). When instead cooperation and antagonism coexist, the resulting signed graph is in general not structurally balanced, meaning that the Perron-Frobenius theorem does not apply directly. It is shown that the decision-making process is affected by the distance to structural balance, in the sense that the higher the frustration of the graph, the higher the commitment strength at which the system bifurcates. In both cases, it is possible to give conditions on the commitment strength beyond which other equilibria start to appear. These extra bifurcations are related to the algebraic connectivity of the graph.
We investigate the construction of multidimensional prolate spheroidal wave functions using techniques from Clifford analysis. The prolates are eigenfunctions of a time-frequency limiting operator, but we show that they are also eigenfunctions of a differential operator. In an effort to compute solutions of this operator, we prove a Bonnet formula for a class of Clifford-Gegenbauer polynomials.
We discuss various optimisation-based approaches to machine learning. Tasks include regression, clustering, and classification. We discuss frequently used terms like 'unsupervised learning,' 'penalty methods,' and 'dual problem.' We motivate our discussion with simple examples and visualisations.
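As a minimal illustration of the "regression as optimisation" viewpoint, the sketch below fits a line by gradient descent on a squared-error loss; the data, step size and iteration count are invented for illustration.

```python
# Fit y ~ a*x + b by gradient descent on the mean squared error.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # exactly y = 2x + 1
a, b, lr = 0.0, 0.0, 0.02
for _ in range(5000):
    ga = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    gb = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    a, b = a - lr * ga, b - lr * gb  # descend along the negative gradient
print(round(a, 3), round(b, 3))      # -> 2.0 1.0
```

Clustering (e.g. k-means) and classification (e.g. logistic regression) fit the same template: choose a loss, then minimise it, often with exactly this kind of first-order method.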
The calculus of variations is utilized to minimize the elastic energy arising from the curvature squared while maximizing the van der Waals energy. Firstly, the shape of folded graphene sheets is investigated, and an arbitrary constant arising from integrating the Euler-Lagrange equation is determined. In this study, the structure is assumed to have a translational symmetry along the fold, so that the problem may be reduced to a two-dimensional problem with reflective symmetry across the fold. Secondly, both the variational calculus technique and a least-squares minimization procedure are employed to determine the joining structure involving a C60 fullerene and a carbon nanotube, namely a nanobud. We find that these two methods are in reasonable overall agreement. However, there is no experimental or simulation data to determine which procedure gives the more realistic results.
For linear and nonlinear dynamical systems, control problems such as feedback stabilization of target sets and feedback laws guaranteeing obstacle avoidance are topics of interest throughout the control literature. While the isolated problems (i.e., guaranteeing only stability or avoidance) are well understood, the combined control problem guaranteeing stability and avoidance simultaneously leads to significant challenges even in the case of linear systems. In this talk we highlight difficulties in the controller design with conflicting objectives in terms of guaranteed avoidance of bounded sets and asymptotic stability of the origin. In addition, using the framework of hybrid systems, we propose a partial solution to the combined control problem for underactuated linear systems.
In this talk, I will survey some of the famous quotient algorithms that can be used to compute efficiently with finitely presented groups. The last part of the talk will be about joint work with Alexander Hulpke (Colorado State University): we have looked at quotient algorithms for non-solvable groups, and I will report on the findings so far.
In computer science, an isomorphism testing problem asks whether two objects are in the same orbit under a group action. The most famous problem of this type has been the graph isomorphism problem. In late 2015, L. Babai announced a quasipolynomial-time algorithm for the graph isomorphism problem, which is widely regarded as a breakthrough in theoretical computer science. This leads to a natural question: which isomorphism testing problems should naturally draw our attention for further exploration?
The Galois group of a polynomial is the automorphism group of its splitting field. These automorphisms act by permuting the roots of the polynomial, so that a Galois group will be a subgroup of a symmetric group. Using the Galois group, the splitting field of a polynomial can be computed more efficiently than otherwise, by exploiting the knowledge of the symmetries of the roots. I will present an algorithm developed by Fieker and Klueners, which I have extended, for computing Galois groups of polynomials over arithmetic fields, as well as approaches to computing splitting fields using the symmetries of the roots.
The history of projection methods goes back to von Neumann and his method of alternating projections for finding a point in the intersection of two linear subspaces. These days the method of alternating projections and its various modifications, such as the Douglas-Rachford algorithm, are successfully used to solve challenging feasibility and optimisation problems. The convergence of projection methods (and its rate) depends on the structure of the sets that comprise the feasibility problem, and also on their position relative to each other. I will survey a selection of results, focusing on the impact of the geometry of the sets on the convergence.
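Von Neumann's original setting is easy to sketch: alternately projecting onto two subspaces converges to a point of their intersection, at a linear rate governed by the angle between them. The two lines and the starting point below are illustrative choices.

```python
import math

# Orthogonal projection of p onto span{direction}, for a unit vector.
def proj_onto_line(p, direction):
    t = p[0] * direction[0] + p[1] * direction[1]
    return (t * direction[0], t * direction[1])

u = (1.0, 0.0)                     # the x-axis
s = 1 / math.sqrt(2)
v = (s, s)                         # the line y = x
x = (4.0, 3.0)
for _ in range(200):
    x = proj_onto_line(proj_onto_line(x, u), v)  # one alternation cycle
print(x)                           # tends to the intersection, the origin
```

Here the angle between the lines is 45 degrees, so each full cycle contracts the distance to the intersection by the factor $\cos^2(45^\circ) = 1/2$, a simple instance of how the geometry of the sets dictates the convergence rate.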
In the past decade, the research area of arithmetic dynamics has grown in prominence. This area considers iterated maps as dynamical systems, acting on the integers, the rationals or on finite fields (meaning there is a finite phase space in the last case). Tools used to investigate arithmetic dynamics include combinatorics, arithmetic geometry, number theory, graph theory as well as numerical experimentation. There are important applications of arithmetic dynamical systems in cryptography. I will survey some of our investigations in arithmetic dynamics which have been motivated by the order and chaos divide in Hamiltonian dynamics.
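A minimal experiment in this spirit (invented for illustration, not taken from the talk): iterate $f(x) = x^2 + 1$ on the finite phase space $Z/pZ$ and record the cycle lengths of the resulting functional graph.

```python
# Iterate f(x) = x^2 + 1 over Z/pZ and record, for each starting point,
# the length of the cycle it eventually falls into.
def cycle_lengths(p):
    f = lambda x: (x * x + 1) % p
    lengths = set()
    for x0 in range(p):
        seen = {}
        x, t = x0, 0
        while x not in seen:       # finite phase space: must repeat
            seen[x] = t
            x, t = f(x), t + 1
        lengths.add(t - seen[x])   # time minus first visit = cycle length
    return sorted(lengths)

print(cycle_lengths(13))           # -> [1, 4]
```

Statistics of such functional graphs (cycle lengths, tail lengths, component sizes) are exactly the kind of data that connects the combinatorial, number-theoretic and experimental tools mentioned above, and they underpin cryptographic applications such as Pollard's rho method.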
Recently, second-order methods have shown great success in a variety of machine learning applications. However, establishing convergence of the canonical member of this class, i.e., the celebrated Newton's method, has long been limited to making restrictive assumptions of (strong) convexity. Furthermore, smoothness assumptions, such as Lipschitz continuity of the gradient/Hessian, have always been an integral part of the analysis. In fact, it is widely believed that in the absence of a well-behaved and continuous Hessian, the application of curvature can hurt more than it can help. This has in turn limited the application range of the classical Newton's method in machine learning. To set the scene, we first briefly highlight some recent results which shed light on the advantages of Newton-type methods for machine learning, as compared with first-order alternatives. We then turn our focus to a new member of this class, Newton-MR, which is derived using two seemingly simple modifications of the classical Newton's method. We show that, unlike the classical Newton's method, Newton-MR can be applied beyond the traditional convex settings, to invex problems. Newton-MR appears almost indistinguishable from its classical counterpart, yet it offers a diverse range of algorithmic and theoretical advantages. Furthermore, by introducing a weaker notion of joint regularity of the Hessian and gradient, we show that Newton-MR converges globally even in the absence of the traditional smoothness assumptions. Finally, we obtain local convergence results in terms of the distance to the set of optimal solutions. This greatly relaxes the notion of an “isolated minimum”, which is required for the local convergence analysis of the classical Newton's method. Numerical simulations using several machine learning problems demonstrate the great potential of Newton-MR as compared with several other second-order methods.
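For reference, the classical (undamped) Newton iteration that the abstract contrasts against can be stated in two lines; the one-dimensional toy problem below is an illustration only, and Newton-MR itself is not reproduced here.

```python
# Classical Newton iteration x <- x - f''(x)^{-1} f'(x), minimising the
# toy objective f(x) = x^4 (convex but not strongly convex at 0).
def newton_min(df, d2f, x, iters=50):
    for _ in range(iters):
        x = x - df(x) / d2f(x)     # Newton step using the curvature
    return x

x_star = newton_min(lambda x: 4 * x ** 3, lambda x: 12 * x ** 2, x=1.0)
print(abs(x_star) < 1e-3)
```

Even on this benign example the step is only $x \mapsto \frac{2}{3}x$ (linear, not quadratic, convergence), hinting at why degenerate Hessians complicate the classical analysis that Newton-MR is designed to relax.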
Joris van der Hoeven and I recently discovered an algorithm that computes the product of two $n$-bit integers in $O(n \log n)$ bit operations. This is asymptotically faster than all previously known algorithms, and matches the complexity bound conjectured by Schönhage and Strassen in 1971. In this talk, I will discuss the history of integer multiplication, and give an overview of the new algorithm. No previous background on multiplication algorithms will be assumed.
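To situate the history: Karatsuba's 1960 algorithm was the first to beat the schoolbook $O(n^2)$ bound, by trading multiplications for additions recursively. The sketch below illustrates that recursive idea only; it is emphatically not the new $O(n \log n)$ algorithm, which is far more involved.

```python
# Karatsuba multiplication: three recursive products instead of four,
# giving complexity O(n^{log2 3}) ~ O(n^{1.585}).
def karatsuba(x, y):
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    hi_x, lo_x = divmod(x, 10 ** m)
    hi_y, lo_y = divmod(y, 10 ** m)
    a = karatsuba(hi_x, hi_y)
    c = karatsuba(lo_x, lo_y)
    b = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - c  # both cross terms at once
    return a * 10 ** (2 * m) + b * 10 ** m + c

print(karatsuba(12345678, 87654321) == 12345678 * 87654321)
```

Subsequent milestones (Toom-Cook, Schönhage-Strassen's FFT-based $O(n \log n \log \log n)$, Fürer) refined this "do fewer big multiplications" theme down to the conjectured $O(n \log n)$.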
Knuth showed that a permutation can be sorted by passing it right-to-left through an infinite stack if and only if it \emph{avoids} a certain forbidden sub-pattern (231). Since then, many variations have been studied. I will describe some of these, including new work of my PhD student Andrew Goh on stacks in series and ``pop-stacks''.
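Knuth's criterion can be checked directly by simulating the stack: the sketch below tests whether a single pass through a stack sorts a given permutation (equivalently, by Knuth's theorem, whether it avoids the pattern 231).

```python
from itertools import permutations

# Simulate one pass through a stack: push each entry, popping smaller
# stack entries to the output first; the output must be 1, 2, 3, ...
def stack_sortable(perm):
    stack, expect = [], 1
    for v in perm:
        while stack and stack[-1] < v:
            if stack[-1] != expect:    # popped out of order: not sortable
                return False
            expect += 1
            stack.pop()
        stack.append(v)
    while stack:
        if stack.pop() != expect:
            return False
        expect += 1
    return True

print(stack_sortable([3, 1, 2]), stack_sortable([2, 3, 1]))  # -> True False
print(sum(stack_sortable(list(p)) for p in permutations(range(1, 4))))  # -> 5
```

The count of sortable permutations of length $n$ is the Catalan number (here $C_3 = 5$), one of many enumerative facts that the variations mentioned in the abstract generalise.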
The dimer model is the finite discrete prototype for problems studied by different scientific communities. From the mathematical point of view, a simple question arises: how many dimer configurations are possible in a certain lattice geometry? Typically in the close-packed arrangement, where the whole lattice space is covered by dimers, different types of dimers organise in a non-homogeneous manner, and under certain conditions this results in a separation of phases characterised by distinct patterns of configurations. The formulation of the dimer model as an integrable two-dimensional lattice model of statistical mechanics opens the path to an investigation of the conformal properties of dimers in the continuum scaling limit. The classification of dimers as a Gaussian free-field theory or a logarithmic field theory is still being debated, for reasons that will be addressed and explained. This is an example of the application of conformal invariance to a statistical model at criticality.
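The counting question is easy to pose in code. The short backtracking sketch below (illustrative only) counts close-packed dimer configurations, i.e. perfect matchings, of an $m \times n$ grid; the $2 \times n$ strip gives Fibonacci numbers, and the $4 \times 4$ grid has 36 configurations, in agreement with Kasteleyn's formula.

```python
# Count dimer configurations (perfect matchings) of an m x n grid by
# backtracking: cover the first uncovered cell with a dimer pointing
# right or down, and recurse.
def count_dimers(m, n):
    cells = frozenset((i, j) for i in range(m) for j in range(n))
    def fill(uncovered):
        if not uncovered:
            return 1
        i, j = min(uncovered)                 # first uncovered cell
        total = 0
        for nb in ((i, j + 1), (i + 1, j)):   # horizontal or vertical dimer
            if nb in uncovered:
                total += fill(uncovered - {(i, j), nb})
        return total
    return fill(cells)

print(count_dimers(2, 3), count_dimers(4, 4))  # -> 3 36
```

Brute force is of course exponential; Kasteleyn's Pfaffian method evaluates the same counts in polynomial time, and it is that exact solvability which makes the continuum-limit questions above tractable.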
We present three bivariate spline approaches to the scattered data problem. The splines are defined as the minimiser of a penalised least squares functional. The penalties are based on partial differentiation operators, and are integrated using the finite element method. We apply these methods to two problems: to remove the mixture of Gaussian and impulsive noise from an image, and to recover a continuous function from a set of noisy observations. Supervisor: Bishnu Lamichhane
I will discuss my Honours work on Stabilisation of Finite Element Schemes for the Stokes Problem. In this work, we use a bi-orthogonal system in our stabilisation term. Supervisor: Bishnu Lamichhane
We investigate the regular action on a regular rooted tree induced by abelian groups satisfying property $R_n$. From this we construct all abelian groups satisfying property $R_n$ when the number of children is prime. Supervisors: George Willis, Andrew Kepert
An important result of X.-J. Wang states that a convex ancient solution to mean curvature flow either sweeps out all of space or lies in a stationary slab (the region between two fixed parallel hyperplanes). We will describe recent results on the construction and classification of convex ancient solutions and convex translating solutions to mean curvature flow which lie in slab regions, highlighting the connection between the two. Work is joint with Theodora Bourni and Giuseppe Tinaglia.
The honeycomb toroidal graphs are a family of graphs I have been looking at now and then for thirty years. I shall discuss an ongoing project dealing with hamiltonicity as well as some of their properties which have recently interested the computer architecture community.
Finite generalised polygons are the rank 2 irreducible spherical buildings, and include projective planes and the generalised quadrangles, hexagons, and octagons. Since the early work of Ostrom and Wagner on the automorphism groups of finite projective planes, there has been great interest in what the automorphism groups of generalised polygons can be, and in particular, whether it is possible to classify generalised polygons with a prescribed symmetry condition. For example, the finite Moufang polygons are the 'classical' examples by a theorem of Fong and Seitz (1973-1974) (and the infinite examples were classified in the work of Tits and Weiss (2002)). In this talk, we give an overview of some recent results on the study of symmetric finite generalised polygons, and in particular, on the work of the speaker with Cai Heng Li and Eric Swartz.
In this talk I'll describe some recent discoveries about edge-transitive graphs and edge-transitive maps. These are objects that have received relatively little attention compared with their vertex-transitive and arc-transitive siblings.
First I will explain a new approach (taken in joint work with Gabriel Verret) to finding all edge-transitive graphs of small order, using single and double actions of transitive permutation groups. This has resulted in the determination of all edge-transitive graphs of order up to 47 (the best possible just now, because the transitive groups of degree 48 are not known), and bipartite edge-transitive graphs of order up to 63. It also led us to the answer to a 1967 question by Folkman about the valency-to-order ratio for regular graphs that are edge- but not vertex-transitive.
Then I'll describe some recent work on edge-transitive maps, helped along by workshops at Oaxaca and Banff in 2017. I'll explain how such maps fall into 14 natural classes (two of which are the classes of regular and chiral maps), and how graphs in each class may be constructed and analysed. This will include the answers to some 18-year-old questions by Širáň,
Tucker and Watkins about the existence of particular kinds of such maps on orientable and non-orientable surfaces.
We consider an $L^2$-gradient flow of closed planar curves whose corresponding evolution equation is of sixth order. Given a smooth initial curve, we show that the solution to the flow exists for all time and, provided the length of the evolving curve remains bounded, smoothly converges to a multiply-covered circle. Moreover, we show that curves in any homotopy class with initially small $L^3\|k_s\|_2^2$ enjoy a uniform length bound under the flow, yielding the convergence result in these cases. We also give some partial results for figure-8 type solutions to the flow. This is joint work with Ben Andrews, Glen Wheeler and Valentina-Mira Wheeler.
In many engineering problems, physical phenomena can occur at different length and time scales, and they are almost impossible to describe with a single mathematical model. More importantly, in such problems, small-scale physical phenomena can dramatically change macroscopic properties of the system. Over the last few decades, particle-based methods have become a powerful tool that allows us to model the physical phenomena of concern at any length and time scale. In this talk, I’ll introduce some widely used particle-based methods and share some of my experience in the development of particle-based mathematical models for engineering problems.
One of CARMA's goals is to foster an environment which provides guidance and support for
what we might call "technical research issues". Broadly, this has meant that CARMA has used
its resources to offer its members technical capabilities which were not readily available
elsewhere, such as collaborative file-sharing, accessible "rich videoconferencing", web site
hosting and web app development, high-performance computing, research software and
visualisation tools like 3-D rendering and 3-D printing. Over the past 10 years, some of
these resources have become available from other sources, including the University of
Newcastle, and for those facilities, CARMA provides guidance about how to access and use
them, as well as for other university systems.
This talk will cover the technical services which CARMA can help you with.
This is a talk for CARMA members, and a light lunch will be served at the start. Please RSVP
for catering purposes to Juliane Turner (Juliane.Turner@newcastle.edu.au).
RHD students are particularly encouraged to attend; please pass this on to your students
if they are not already engaged with CARMA.
This talk will be about the new course in mathematics at the University of Newcastle, MATH2005, Einstein, Bach and the Taj Mahal: Symmetry in the Arts, Sciences and Humanities. The course handbook description is:
Symmetry is an organising principle that plays a role, often unrecognised, in a vast range of disciplines, from mathematics and the physical sciences to music, design and the arts. This course aims to introduce students from a variety of disciplines to symmetry and its consequences. While symmetry is associated with beauty, balance and harmony, it is also associated with conservation, stasis and boredom, and on its own symmetry is not enough to explain the richness, diversity and dynamism of the universe. In contrast, the concept of symmetry breaking is associated with transitions and evolution, and linked to self-organisation, emergent behaviour and the appearance of information.
Beyond what is learnt about symmetry and symmetry breaking in this course, it is hoped that the concepts will challenge and change the thinking of students as they approach future subjects in their own disciplines.
We present an accurate database of diffusion properties of Ni-Zr melts, generated within the framework of the molecular-dynamics method in conjunction with a semi-empirical many-body interatomic potential. The reliability of the model description of Ni-Zr melts is confirmed via comparison of our simulation results with the existing experimental data on the diffusion properties of Ni-Zr melts. A statistical mechanical formalism is employed to understand the behaviour of the cross-correlation between the interdiffusion flux and the force caused by the difference in the average random accelerations of atoms of different species in the short-time limit. Through further investigation of the diffusion dynamics, the most viable composition range for glass formers is identified.
We will discuss the patterns that necessarily occur in sets of positive density in homogeneous trees and certain affine buildings. Based on joint work with Michael Bjorklund (Chalmers) and James Parkinson (University of Sydney).
For some time now, I have been trying to understand the intricacy and complexity of integer sequences from a variety of different viewpoints, and at least at some level to reconcile these viewpoints. However vague that sounds (and it certainly is vague to me), in this talk I hope to explain this sentiment a bit. While a variety of results will be considered, I will focus closely on two examples of wider interest: the Thue–Morse sequence and the set of $k$-free integers.
I offer a leisurely introduction to the 'what', 'why', 'what exactly' and 'how' of my research, which revolves around groups acting on trees. Following a motivation of the subject by situating said groups within the broader theory of all groups, I explain the meaning of 'local' and 'global' in this context. With some examples of groups acting on trees at hand, I illustrate how, in general, 'local' actions have 'global' implications. (Credit to Alejandra Garrido for the title!)
The Great Barrier Reef (GBR) is under threat. After climate change, water quality is recognised as the greatest stressor on the GBR. Sediments eroded from the catchment are transported to the marine environment, leading to poor water quality in the GBR lagoon. Suspended sediment reduces light availability and impedes seagrass growth. Sedimentation can bury coral polyps, cause tissue necrosis, and reduce the recruitment and survival of coral larvae, leading to coral reef decline. Sediment can also transport nutrients into the lagoon, potentially leading to eutrophication, algal blooms, and Crown of Thorns Starfish outbreaks. Gullies, particularly in grazing areas, have been identified as leading contributors to sediment reaching the GBR lagoon, despite occupying a small proportion of the landscape. Reducing gully erosion is critical to improving the water quality of the GBR; however, the current pace of change is insufficient to achieve water quality targets. To guide investment, improved mathematical models of gully erosion are sought that can better assess the efficiency of remediation actions.
Historically, local gully erosion processes have been poorly represented by models. Empirical and conceptual models have been developed to provide guidance on the expected annual average sediment supply from gullied areas; however, these are poorly suited to informing interventions or guiding investment. In this talk we develop a process-based model to describe the erosion of sediment from an idealised alluvial gully of linear form. We explore this model through the lens of supporting investment decisions to remediate gullied landscapes and demonstrate how the model can be applied in this context.
It will be seen in this talk that certain geometrical theorems may be proved rigorously by checking only three cases. The idea is what Doron Zeilberger calls an 'Ansatz' -- that once we know the general form of a solution we can find the exact solution by checking a few cases. He gives examples where formulae usually established by induction can in fact be proved by checking a small number of cases. We shall do the same for Napoleon's Theorem and also for geometric theorems which don't seem to have been known either to the ancients or to Napoleon Bonaparte.
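In the same check-a-few-cases spirit (this is my own illustrative sketch, not code from the talk), Napoleon's Theorem can be verified numerically for any test triangle by working with complex numbers to erect the equilateral triangles on each side:

```python
import cmath
import math

def napoleon_centers(a, b, c):
    """Centres of the equilateral triangles erected (consistently on one
    side) on the sides of the triangle with complex vertices a, b, c."""
    w = cmath.exp(-1j * math.pi / 3)  # rotation by -60 degrees
    centers = []
    for p, q in ((a, b), (b, c), (c, a)):
        apex = p + (q - p) * w              # third vertex of the erected triangle
        centers.append((p + q + apex) / 3)  # its centroid
    return centers

def is_equilateral(z1, z2, z3, tol=1e-9):
    sides = [abs(z1 - z2), abs(z2 - z3), abs(z3 - z1)]
    return max(sides) - min(sides) < tol

# Napoleon's Theorem: the three centres always form an equilateral triangle.
print(is_equilateral(*napoleon_centers(0j, 1 + 0j, 1j)))  # True
```

Checks like this for a handful of triangles, combined with a degree argument of the kind Zeilberger advocates, are what turn a numerical experiment into a rigorous proof.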
High-school mathematics only will be assumed. Themes such as computer algebra and notions of proof will be touched on, as will the historical context of ideas such as calculus and complex numbers seen in first-year university mathematics courses.
A talk given at ACSME. Joint work with Joel Black, Naomi Borwein, Florian Breuer, Peter Ellerton, Jo-Ann Larkins and Malcolm Roberts.
Bryan will be talking about his work with Western EcoSystems Technology Inc. in the USA, where his projects include assessing endangered fish in rivers and fisheries bycatch of marine mammals and birds using change-point methods. Sampling designs and analysis for the Alaska Marine Mammal Observer Program will be discussed, as will various analyses for the San Luis and Delta-Mendota Water Association and the Metropolitan Water Association of Southern California. Research for the U.S. Army Corps of Engineers involves predicting fish survival rates in dams in the Columbia River Basin using the virtual/paired release method.
This talk will focus on the basic concepts of first-principles molecular dynamics (FPMD) and on some related applications developed within our team at IPCMS in Strasbourg. We are interested in achieving quantitative predictions for materials at the atomic scale by relying on models based on an appropriate account of chemical bonding. This scheme allows for the production of temporal trajectories ensuring the connection between statistical mechanics and macroscopic properties. FPMD lies at the crossroads between molecular dynamics and density functional theory, the latter playing the role of a potential energy depending on both the atomic and electronic structure of the system. Examples will be provided for several areas within computational materials science, with special emphasis on disordered materials.
Classic modelling of biological systems assumes the length scale of interaction is far less than the modelling length scale. However, biological interactions can occur over long ranges via mechanisms such as sight and smell. It is possible to capture these interactions using classic conservation laws with a non-local velocity term. In this talk I will present various applications of non-local modelling from the modelling of phagocytosis at a single cell level up to the swarming behaviour of locusts. I will also look at various analysis and simulation techniques needed to approach these problems. Finally, I will present future goals and direction for my work.
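A generic instance of this class of models (illustrative only, not necessarily the specific equations of the talk) is a conservation law whose advective velocity is a convolution of the density $\rho$ with an interaction kernel $K$:

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot \bigl( \rho \, v[\rho] \bigr) = 0,
\qquad
v[\rho](x,t) = \int \nabla K(x - y)\, \rho(y,t)\, \mathrm{d}y .
```

Here long-range attraction or repulsion is encoded in the shape of $K$, so the velocity at $x$ depends on the density everywhere rather than only locally.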
Systems with small parameters are often studied using asymptotic techniques. Despite the ubiquity of these techniques, many classical asymptotic methods are unable to capture behaviour that occurs on an exponentially small scale, which lies "beyond all orders" of power series in the small parameter. Typically this does not cause any issues; this behaviour is too small to have a measurable impact on the overall behaviour of the system. I will showcase two systems in which exponentially small contributions have a significant effect on the overall system behaviour.
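The sense in which such contributions lie beyond all orders can be stated concretely: a term of size $\mathrm{e}^{-c/\epsilon}$ with $c > 0$ satisfies

```latex
\mathrm{e}^{-c/\epsilon} = o(\epsilon^{n}) \quad \text{as } \epsilon \to 0^{+}, \quad \text{for every } n \geq 0,
```

so it is invisible to any truncation of a power series $\sum_n a_n \epsilon^n$ in the small parameter.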
The first system, which I will discuss in detail, concerns nonlinear waves propagating through particle chains with periodic masses. I will show that, for certain combinations of parameters determined by the exponentially small system behaviour, Toda and FPUT lattices can produce solitary waves that propagate indefinitely. The second system, which I will discuss more briefly, is the shape of bubbles in a steadily translating Hele-Shaw cell. By studying exponentially small effects, it is possible to construct exotic bubble shapes which correspond to recent laboratory experiments.
Self-similarity (when part of an object is a scaled version of the whole) is one of the most basic forms of symmetry. While known and used since ancient times, its use and investigation took off in the 1980s thanks to the advent of fractals, whose infinite self-similar structure has captured the imagination of mathematicians and lay people alike.
Self-similar fractals are highly symmetrical, so much so that even their symmetry groups exhibit self-similarity. In this talk, I will introduce and discuss groups which are self-similar, or fractal, in an algebraic sense; their connections to fractals, symbolic dynamics and automata theory; how they produce fascinating new examples in group theory, and some research questions in this lively new area.
A non-singular measurable dynamical system is a measure space $X$ whose measure $\mu$ has the property that $\mu $ and $\mu \circ T$ are equivalent measures (in the sense that they have the same sets of measure zero).
Here $T$ is a bimeasurable invertible transformation of $X$. The basic building blocks are the \emph{ergodic} measures.
Von Neumann proposed a classification of non-singular ergodic dynamical systems, and this has been elaborated subsequently by Krieger, Connes and others. This work has deep connections with C*-algebras.
I will describe some work by myself, collaborators and students which explores the classification of dynamical systems from the point of view of measure theory. In particular, we have recently been exploring the notion of critical dimension, a study of the rate of growth of sums of Radon-Nikodym derivatives $\sum_{k=1}^n \frac{d(\mu \circ T^k)}{d\mu}$. Recently, we have been replacing the single transformation $T$ with a group acting on the space $X$.
Let $X$ be the Cantor set and let $g$ be a minimal homeomorphism of $X$ (that is, every orbit is dense). Then the topological full group $\tau[g]$ of $g$ consists of all homeomorphisms $h$ of $X$ that act 'piecewise' as powers of $g$; in other words, $X$ can be partitioned into finitely many clopen pieces $X_1,\dots,X_n$ such that for each $i$, $h$ acts on $X_i$ as a constant power of $g$. Such groups have attracted considerable interest in dynamical systems and group theory: for instance, they characterize the homeomorphism up to flip conjugacy (Giordano--Putnam--Skau), and they provided the first known examples of infinite finitely generated simple amenable groups (Juschenko--Monod). My talk is motivated by the following question: given $h\in\tau[g]$ for some minimal homeomorphism $g$, what can the closures of orbits of $h$ look like?
Certainly $h\in\tau[g]$ is not minimal in general, but it turns out to be quite close to minimal, in the following sense: there is a decomposition of $X$ into finitely many clopen invariant pieces, such that on each piece $h$ acts as a homeomorphism that is either minimal or of finite order. Moreover, on each of the minimal pieces, either $h$ or $h^{-1}$ has a 'positive drift' with respect to the orbits of $g$; in fact, it can be written in a canonical way as a conjugate of a product of induced transformations (also known as first return maps) of $g$.
No background knowledge of topological full groups is required; I will introduce all the necessary concepts in the talk.
I will outline a new theory of fractal tilings. The approach uses graph iterated function systems (IFS) and centers on underlying symbolic shift spaces. These provide a zero dimensional representation of the intricate relationship between shift dynamics on fractals and renormalization dynamics on spaces of tilings. The ideas I will describe unify, simplify, and substantially extend key concepts in foundational papers by Solomyak, Anderson and Putnam, and others. In effect, IFS theory on the one hand, and self-similar tiling theory on the other, are unified.
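As a minimal illustration of an IFS attractor (a single IFS rather than the graph IFS of the talk; the code and names are my own sketch), the 'chaos game' samples points on the Sierpinski gasket, the attractor of three maps each contracting the plane by a factor of two toward a vertex of a triangle:

```python
import random

# Vertices of the base triangle; the IFS maps are z -> (z + v)/2 for each vertex v.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def chaos_game(n_points, seed=0):
    """Sample points that accumulate on the attractor of the IFS."""
    rng = random.Random(seed)
    x, y = 0.3, 0.3  # any starting point inside the triangle works
    pts = []
    for _ in range(n_points):
        vx, vy = rng.choice(VERTICES)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0  # apply a randomly chosen contraction
        pts.append((x, y))
    return pts
```

The random sequence of chosen maps is precisely a point of a symbolic shift space, which is the zero-dimensional addressing of attractor points alluded to above.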
The work presented is largely new and has not yet been submitted for publication. It is joint work with Andrew Vince (UFL) and Louisa Barnsley. The presentation will include links to detailed notes. The figures illustrate 2d fractal tilings.
By way of recommended background reading I mention the following award-winning paper: M. F. Barnsley, A. Vince, Self-similar polygonal tilings, Amer. Math. Monthly 124 (2017) 905-921.
Automorphism groups of complexes are a productive area of study, not only for understanding the structure of complexes but also for providing examples of groups of various kinds. Because we are dealing with infinite groups of automorphisms, one important line of research is deriving properties of the automorphism group of a complex from properties of the originating complex.
In this talk, we will discuss three problems related to the theory of convex cones, namely: i) the isomorphism problem, ii) the homogeneity problem and iii) the self-duality problem. After explaining why one should care about such questions, I will present a few results on those problems. In particular, I will cover recent results on the p-cones and their automorphism groups. This is joint work with Masaru Ito (Nihon University).
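For concreteness, the p-cone in $\mathbb{R}^{n+1}$ is the set of pairs $(t, x)$ with $t \ge \|x\|_p$; a tiny membership check (my own illustrative sketch, not code from the talk):

```python
def in_p_cone(t, x, p):
    """Test whether (t, x) lies in the p-cone { (t, x) : t >= ||x||_p }, p >= 1."""
    norm = sum(abs(xi) ** p for xi in x) ** (1.0 / p)
    return t >= norm
```

Under the standard inner product the dual of the p-cone is the q-cone with $1/p + 1/q = 1$, so $p = 2$ gives the familiar self-dual second-order cone; for other values of $p$ the self-duality question becomes subtle.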
Generalised polygons were first introduced by Jacques Tits in 1959, in the context of studying geometric realisations of the finite simple groups of Lie type. Thus, the study of their symmetry groups and symmetry properties is a rich area of research. My work has focused on studying the point-primitive quadrangles. In my talk I will describe a computer program for testing whether a particular group can act point-primitively on a generalised quadrangle and its application to analysing the almost simple sporadic groups. My work on this program motivated the discovery of a new result dubbed the Line Orbit Lemma, which in turn inspired the conjecturing of the Hemisystem Conjecture, both of which could prove very useful in the analysis of point-primitive quadrangles.
Optimization is often viewed as an active yet mature research field. However, the recent and rapid development of the emerging field of Imaging Sciences has provided a very rich source of new problems, as well as big challenges, for optimization. Such problems, typically involving non-smooth and non-convex functionals, demand urgent and major improvements on traditional solution methods suited to convex and differentiable functionals.
This talk presents a limited review of a set of imaging models investigated by the Liverpool group as well as other groups, drawn from the huge literature of related work. We start with image restoration models regularised by the total variation and by high-order regularisers. We then show some results from image registration, aligning a pair of images which may be single-modality or multi-modality, with the latter very much non-trivial. Next we review variational models for image segmentation. Finally we show some recent attempts to extend our image registration models from more traditional optimization to the Deep Learning framework.
Joint work with recent and current collaborators including D P Zhang, A Theljani, M Roberts, J P Zhang, A Jumaat, T Thompson.
Many interesting objects in the study of the dynamics of complex algebraic varieties are known or conjectured to be transcendental, such as the uniformizing map describing the (complement of a) Julia set, or the Feigenbaum constant. We will discuss various connections between transcendence theory and complex dynamics, focusing on recent developments using transcendence theory to describe the intersection of orbits in algebraic varieties, and the realization of transcendental numbers as measures of dynamical complexity for certain families of maps.
The production of intricate structures at the nanoscale has gone from fantasy to reality in just a few decades. This rapid speed of development in experimental techniques has left a large gap for mathematicians to fill, whether by optimising existing methods or developing predictive tools to lower the cost (in both time and money) of experimentation and fabrication. In this seminar I will begin with a background and review, including a discussion of carbon materials and polycyclic aromatic hydrocarbons and their uses; the Lennard-Jones potential, its use as an interatomic potential function to model van der Waals forces, and why this potential is relevant to carbon materials; the continuum approximation of the Lennard-Jones potential and its usefulness when modelling intermolecular potentials; and lastly an overview of molecular dynamics simulations.
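For reference, the standard 12-6 Lennard-Jones potential is $V(r) = 4\epsilon[(\sigma/r)^{12} - (\sigma/r)^{6}]$; a minimal sketch in reduced units (the parameter names are my own, not from the talk):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones potential: repulsive r^-12 core plus
    attractive r^-6 van der Waals tail."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The well minimum sits at r = 2^(1/6) * sigma, with depth -epsilon.
r_min = 2.0 ** (1.0 / 6.0)
print(round(lennard_jones(r_min), 12))  # -1.0
```

The continuum approximation referred to above replaces pairwise sums of this potential over atoms by integrals over the molecular surfaces.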
I will then present my preliminary results, which begin with a motivating problem: modelling a stack of coronene molecules encapsulated within a single-walled carbon nanotube. I will then discuss how we model the system by picking appropriate surfaces and solving integrals derived from the smaller coronene-coronene and coronene-nanotube interactions, which we then use to build up an analytic expression for the entire system.
Next I will consider the research I will undertake in the near future, which includes investigating non-constant attractive and repulsive coefficients within the Lennard-Jones potential, analysing carbon nanomaterials and other nanostructures used for gas storage, and briefly touching on modelling carbon capture.
Lastly I will go over my research plan, including manuscripts I have submitted and aim to submit, conferences and events I have attended and will attend in the coming year, and finally a rough timeframe for the completion of my thesis.