The AMSI Summer School is an exciting opportunity for mathematical sciences students from around Australia to come together over the summer break to develop their skills and networks. Details are available from the 2015 AMSI Summer School website.
Also see the CARMA events page for details of some Summer School seminars, open to all!
Long before current graphic, visualisation and geometric tools were available, John E. Littlewood, 1885-1977, wrote in his delightful Miscellany:
A heavy warning used to be given [by lecturers] that pictures are not rigorous; this has never had its bluff called and has permanently frightened its victims into playing for safety. Some pictures, of course, are not rigorous, but I should say most are (and I use them whenever possible myself). [[L], p. 53]
Over the past decade, the role of visual computing in my own research has expanded dramatically. In part this was made possible by the increasing speed and storage capabilities and the growing ease of programming of modern multi-core computing environments [BSC]. But, at least as much, it has been driven by my group's paying more active attention to the possibilities for graphing, animating or simulating most mathematical research activities.
I shall describe diverse work from my group in transcendental number theory (normality of real numbers [AB3]), in dynamic geometry (iterative reflection methods [AB]), probability (behaviour of short random walks [BS, BSWZ]), and matrix completion problems (especially, applied to protein conformation [ABT]). While all of this involved significant numerical-symbolic computation, I shall focus on the visual and experimental components.
AB F. Aragon and J.M. Borwein, "Global convergence of a non-convex Douglas-Rachford iteration." J. Global Optimization 57(3) (2013), 753-769. DOI 10.1007/s10898-012-9958-4.
AB3 F. Aragon, D.H. Bailey, J.M. Borwein and P.B. Borwein, "Walking on real numbers." Mathematical Intelligencer 35(1) (2013), 42-60. See also http://walks.carma.newcastle.edu.au/.
ABT F. Aragon, J.M. Borwein and M. Tam, "Douglas-Rachford feasibility methods for matrix completion problems." ANZIAM Journal. Galleys June 2014. See also http://carma.newcastle.edu.au/DRmethods/.
BSC J.M. Borwein, M. Skerritt and C. Maitland, "Computation of a lower bound to Giuga's primality conjecture." Integers 13 (2013). Online Sept 2013 at #A67, http://www.westga.edu/~integers/cgi-bin/get.cgi.
BS J.M. Borwein and A. Straub, "Mahler measures, short walks and logsine integrals." Theoretical Computer Science, special issue on Symbolic and Numeric Computation, 479(1) (2013), 4-21. DOI: http://link.springer.com/article/10.1016/j.tcs.2012.10.025.
BSWZ J.M. Borwein, A. Straub, J. Wan and W. Zudilin (with an appendix by Don Zagier), "Densities of short uniform random walks." Can. J. Math. 64(5) (2012), 961-990. http://dx.doi.org/10.4153/CJM-2011-079-2.
L J.E. Littlewood, A Mathematician's Miscellany, London: Methuen (1953); J.E. Littlewood and Béla Bollobás (ed.), Littlewood's Miscellany, Cambridge University Press, 1986.
This talk will highlight links between topics studied in undergraduate mathematics on one hand and frontiers of current research in analysis and symmetry on the other. The approach will be semi-historical and will aim to give an impression of what the research is about.
Fundamental ideas in calculus, such as continuity, differentiation and integration, are first encountered in the setting of functions on the real line. In addition to topological properties of the line, the algebraic properties of being a group and a field, that the set of real numbers possesses, are also important. These properties express symmetries of the set of real numbers, and it turns out that this combination of calculus, algebra and symmetry extends to the setting of functions on locally compact groups, of which the group of rotations of a sphere and the group of automorphisms of a locally finite graph are examples. Not only do these groups frequently occur in applications, but theorems established prior to 1955 show that they are exactly the groups that support integration and differentiation.
Integration and continuity of functions on the circle and the group of rotations of the circle are the basic ingredients for Fourier analysis, which deals with convolution function algebras supported on the circle. Since these basic ingredients extend to locally compact groups, so do the methods of Fourier analysis, and the study of convolution algebras on these groups is known as harmonic analysis. Indeed, there is such a close connection between harmonic analysis and locally compact groups that any locally compact group may be recovered from the convolution algebras that it carries. This fact has recently been exploited with the creation of a theory of `locally compact quantum groups' that axiomatises properties of the algebras appearing in harmonic analysis and does away with the underlying group.
Locally compact groups have a rich structure theory in which significant advances are also currently being made. This theory divides into two cases: when the group is a connected topological space and when it is totally disconnected. The connected case has been well understood since the solution of Hilbert's Fifth Problem in the 1950s, which showed that connected locally compact groups are essentially Lie groups. (Lie groups form the symmetries of smooth structures occurring in physics and underpinned, for example, the prediction of the existence of the Higgs boson.) For a long time it was thought that little could be said about totally disconnected groups in general, although important classes of such groups arising in number theory and as automorphism groups of graphs could be understood using techniques special to those classes. However, a complete general theory of these groups is now beginning to take shape following several breakthroughs in recent years. There is the exciting prospect that an understanding of totally disconnected groups matching that of the connected groups will be achieved in the next decade.
In this talk I will discuss a class of systems evolving over two independent variables, which we refer to as "2D". For the class considered, extensions of ODE Lyapunov stability analysis can be made to ensure different forms of stability of the system. In particular, we can describe sufficient conditions for stability in terms of the divergence of a vector Lyapunov function.
People who study geometry like to ask the question: "What is the shape of that?" In this case, the word "that" can refer to a variety of things, from triangles and circles to knots and surfaces to the universe we inhabit and beyond. In this talk, we will examine some of my favourite gems from the world of geometry and see the interplay between geometry, algebra, and theoretical physics. And the only prerequisite you will need is your imagination!
Norman Do is, first and foremost, a self-confessed maths geek! As a high school student, he represented Australia at the International Mathematical Olympiad. He completed a PhD at The University of Melbourne, before working at McGill University in Canada. He is currently a Lecturer and a DECRA Research Fellow in the School of Mathematical Sciences at Monash University.
His research lies at the interface of geometry and mathematical physics, although he is excited by almost any flavour of mathematics. Norman is heavily involved in enrichment for school students, regularly lecturing at the National Mathematics Summer School and currently chairing the Australian Mathematical Olympiad Senior Problems Committee.
This event is run in conjunction with the University of Newcastle's 50th anniversary celebrations. Space! For Star Trek fans it's the final frontier - with all of the vastly, hugely, mind-boggling room it contains, it allows scientists and researchers of all persuasions to go where no one has gone before and explore worlds not yet explored. Like Star Trek fans, many mathematicians and statisticians are also interested in exploring the dynamics of space. From a statistician's point of view, our often data-driven perspective means we are concerned with exploring data that exists in multi-dimensional space and trying to visualise it using as few dimensions as possible.
This presentation will outline the links between the analysis of categorical data, multi-dimensional space, and the reduction of this space. The technique we explore is correspondence analysis and we shall see how eigen- and singular value decomposition fit into this data visualisation technique. We shall therefore look at some of the fundamental aspects of correspondence analysis and the various ways in which categorical data can be visualised.
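As a rough numerical illustration of where the singular value decomposition enters, the following sketch (Python with numpy; the contingency table is invented purely for illustration) forms the matrix of standardised residuals of a two-way table and reads off the first two principal coordinates of the rows:

    import numpy as np

    # Hypothetical two-way contingency table of counts (rows = categories of one
    # variable, columns = categories of the other); the numbers are made up.
    N = np.array([[20, 15,  5],
                  [10, 25, 15],
                  [ 5, 10, 30]], dtype=float)

    P = N / N.sum()                 # correspondence matrix
    r = P.sum(axis=1)               # row masses
    c = P.sum(axis=0)               # column masses

    # Standardised residuals: D_r^{-1/2} (P - r c^T) D_c^{-1/2}
    S = np.diag(r**-0.5) @ (P - np.outer(r, c)) @ np.diag(c**-0.5)

    # The SVD supplies the principal axes of the row and column clouds.
    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

    # Principal coordinates of the rows on the first two dimensions.
    row_coords = np.diag(r**-0.5) @ U[:, :2] * sigma[:2]
    print("singular values:", sigma)
    print("row principal coordinates:\n", row_coords)

Plotting the row (and analogously column) coordinates gives the low-dimensional picture of the categorical data that the talk describes.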
I will summarize the main ingredients and results on classical conjugate duality for optimization problems, as given by Rockafellar in 1973.
I shall review convergence results for non-convex Douglas-Rachford iterations.
A look into an extension of the proof, due to Davenport and Erdős, that a certain class of numbers is normal, as well as a leap into the world of experimental mathematics relating to the property of strong normality, in particular the strong normality of some very famous numbers.
Inspired by the Hadamard Maximal Determinant Problem, we investigate the possible Gram matrices from rectangular {+1, -1} matrices. We can fully classify and count the Gram matrices from rectangular {+1, -1} matrices with just two rows and have conjectured a counting formula for the Gram matrices when there are more than two rows in the original matrix.
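A tiny brute-force companion to this (illustrative only, not the classification itself) enumerates all $2 \times n$ $\{+1,-1\}$ matrices and records the distinct Gram matrices $MM^T$ that arise:

    from itertools import product
    import numpy as np

    def gram_matrices(n):
        """Distinct Gram matrices M M^T arising from 2 x n {+1,-1} matrices."""
        seen = set()
        for rows in product(product((1, -1), repeat=n), repeat=2):
            M = np.array(rows)
            seen.add(tuple(map(tuple, M @ M.T)))
        return seen

    for n in range(2, 7):
        print(n, len(gram_matrices(n)))

Counting the sizes of these sets for small n is one way to guess, and then test, a counting formula of the kind mentioned above.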
We build upon the ideas of short random walks in 2 dimensions in an attempt to understand the behaviours of these objects in higher dimensions. We explore the density and moment functions to find combinatorial and analytical results that generalise nicely.
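A hedged Monte Carlo sketch of the kind of experiment involved (assuming each step is an independent, uniformly random unit vector in $\mathbb{R}^d$): estimate a moment of the distance travelled after a few steps and compare it with the value it must take.

    import numpy as np

    rng = np.random.default_rng(0)

    def distance_samples(n_steps, dim, samples=200_000):
        """Samples of |X_1 + ... + X_n| for independent uniform unit steps in R^dim."""
        steps = rng.normal(size=(samples, n_steps, dim))
        steps /= np.linalg.norm(steps, axis=2, keepdims=True)   # uniform directions
        return np.linalg.norm(steps.sum(axis=1), axis=1)

    for dim in (2, 3, 4):
        est = (distance_samples(3, dim) ** 2).mean()
        print(f"dim {dim}: E|W_3|^2 ~ {est:.3f}  (the second moment equals 3 in every dimension)")

Higher (and fractional) moments, and kernel estimates of the density of $|W_n|$, can be probed the same way before attempting exact evaluation.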
A history of Pi in the American Mathematical Monthly and the variety of approaches to understanding this stubborn constant. I will focus on the common threads of discussion over the last century, especially the changing methods for computing pi to high precision, to illustrate how we have progressed to our current state.
In this talk I will be exploring certain aspects of permutations of length n that avoid the pattern 1324. This is an interesting pattern in that it is simple yet defies simple analysis. It can be shown that there is a growth rate, yet it cannot be shown what that growth rate is; nor has an explicit formula been found to give the number of permutations of length n which avoid the pattern (whereas this has been found for every other non-Wilf-equivalent length-4 pattern). Specifically, this talk will look at how an encoding technique (developed by Bóna) for the 1324-avoiding permutations was cleverly used to obtain an upper bound for the growth rate of this class.
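For small $n$ the class can be checked by brute force; the sketch below (exponential time, purely for illustration) counts the 1324-avoiding permutations directly and reproduces the first few known terms.

    from itertools import permutations, combinations

    def avoids_1324(p):
        """True if p contains no four positions i<j<k<l with p[i] < p[k] < p[j] < p[l]."""
        for i, j, k, l in combinations(range(len(p)), 4):
            if p[i] < p[k] < p[j] < p[l]:
                return False
        return True

    for n in range(1, 8):
        count = sum(avoids_1324(p) for p in permutations(range(n)))
        print(n, count)   # 1, 2, 6, 23, 103, 513, 2762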
The fairness of voting systems has been a topic of interest to mathematicians since 1770, when the Marquis de Condorcet proposed the Condorcet criterion, and particularly so after 1951, when Kenneth Arrow proved his impossibility theorem, which shows that no rank-order voting system can satisfy all the properties one would desire.
The system I have been studying is known as runoff voting. It is a method of voting used around the world, often for presidential elections such as in France. Each voter selects their favourite candidate, and if any candidate receives above 50% of the vote, then they are elected. If no one reaches this, then another election will be held, but this time with only the top 2 candidates from the previous election. Whoever receives more votes in this second round will be elected. The runoff voting system satisfies a number of desired properties, though the running of the second round can have significant drawbacks. It can be very costly, it can result in periods of time without government, and it has been known to cause unrest in some politically unstable countries.
In my research I have introduced the parameter alpha, which varies the original threshold of 50% for a candidate winning the election in the first round. I am using both analytical methods and simulation to observe how the properties change with alpha.
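A minimal simulation sketch of the parameterised rule (the candidates, ballot distribution and electorate size below are invented for illustration and are not the model used in the research):

    import random
    from collections import Counter

    def runoff_winner(ballots, alpha=0.5):
        """Runoff election: ballots are full preference lists; alpha is the round-one threshold."""
        first = Counter(b[0] for b in ballots)
        leader, votes = first.most_common(1)[0]
        if votes > alpha * len(ballots):          # elected outright in round one
            return leader
        finalists = [c for c, _ in first.most_common(2)]
        second = Counter(next(c for c in b if c in finalists) for b in ballots)
        return second.most_common(1)[0][0]

    candidates = list("ABCD")
    random.seed(1)
    ballots = [random.sample(candidates, len(candidates)) for _ in range(1000)]
    print(runoff_winner(ballots, alpha=0.5), runoff_winner(ballots, alpha=0.3))

Repeating such simulations over many random electorates is one way to observe how the properties of the system vary with alpha.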
As an extension of Copeland and Erdős's original paper of the same title, we present a clearer and more complete version of the proof that the number of integers up to $N$ ($N$ sufficiently large) which are not $(\varepsilon,k)$ normal is less than $N^{\delta}$ where $\delta<1$. We also conjecture that the numbers formed from the concatenation of the increasing sequence $a_{1},a_{2},a_{3},\dots$ (provided the sequence is dense enough) are not strongly normal.
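The flavour of the digit-counting involved can be seen in a toy experiment: concatenate an increasing sequence (here simply $1,2,3,\dots$, as in Champernowne's constant) and compare the frequency of each length-$k$ block with the expected $10^{-k}$. This is only an illustration, not the method of proof.

    from collections import Counter

    def block_frequencies(digits, k):
        """Relative frequencies of length-k digit blocks in a digit string."""
        blocks = Counter(digits[i:i + k] for i in range(len(digits) - k + 1))
        total = sum(blocks.values())
        return {b: c / total for b, c in blocks.items()}

    # Digits of Champernowne's constant 0.123456789101112...
    digits = "".join(str(m) for m in range(1, 200_000))

    freqs = block_frequencies(digits, 2)
    worst = max(abs(f - 0.01) for f in freqs.values())
    print(len(digits), "digits; largest deviation of a 2-block frequency from 1/100:", round(worst, 5))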
We consider the problem of scattering of waves by a string with attached masses, focussing on the problem in the time-domain. We propose this as a simple model for more complicated wave scattering problems which arise in the study of elastic metamaterials. We present the governing system of equations and show how we have solved them. Some numerical simulations are also presented.
The pooling problem is a nonlinear program (NLP) with applications in the refining and petrochemical industries, but also in mining. While it has been shown that the pooling problem is strongly NP-hard, it is one of the most promising NLPs to be solved to global optimality. In this talk I will discuss strengths and weaknesses of problem formulations and solution techniques. In particular, I will discuss convex and linear relaxations of the pooling problem, and show how they are related to graph theory, polyhedral theory and combinatorial optimization.
The Fourier Transform is a central and powerful tool in signal processing as well as being essential to Complex Analysis. However, it is limited to acting on complex-valued functions and thus cannot be applied directly to colour images (which have three real values' worth of output). In this talk, I discuss the limitations of current methods, then discuss several methods of extending the Fourier Transform to larger algebras (specifically the Quaternions and Clifford algebras). This informs a research plan involving the study and computer implementation of a particular Clifford Fourier Transform.
In this talk, accessible to a general audience and particularly to students, we will review the most important contributions of Leonhard Euler to mathematics. We will give a brief biography of Leonhard Euler and a broad survey of his great achievements.
Random walks have been used to model stochastic processes in many scientific fields. I will introduce invariant random walks on groups, where the transition probabilities are given by a probability measure. The Poisson boundary will also be discussed. It is a space associated with every group random walk that encapsulates the behaviour of the walks at infinity and gives a description of certain harmonic functions on the group in terms of the essentially bounded functions on the boundary. I will conclude with a discussion of project aims, namely to compute the boundary for certain random walks in new cases and to investigate the order structure of certain ideals in $L^1(G)$ defined for each invariant random walk.
Supervisors: Prof. George Willis, Dr Jeff Hogan.
The power domination problem is a variant of the famous domination problem. It has applications in the monitoring of electric power networks. In this talk, we give a literature review of the work done so far and the possible open areas of research. We also introduce two interesting variants of power domination: the resolving power domination problem and the propagation problem. We present preliminary work and a research plan for the future.
Supervisors: Prof. Mirka Miller, Dr Joe Ryan, Prof. Paul D Manuel.
I will lecture on 32 proofs of a theorem of Euler posed by mistake by Goldbach regarding $\zeta(3)$. See http://www.carma.newcastle.edu.au/jon/goldbach-talk10.pdf.
A look into infinity, a few famous problems, and a little bit of normality.
It is well known that there is a one-to-one correspondence between signed plane graphs and link diagrams via the medial construction. The relationship was used in knot tabulation in the early days of knot theory. Indeed, it provides a method of studying links using graphs. Let $G$ be a plane graph. Let $D(G)$ be the alternating link diagram corresponding to the (positive) $G$ obtained from the medial construction. A state $S$ of $D$ is a labelling of each crossing of $D$ by either $A$ or $B$. Making the corresponding split for each crossing gives a number of disjoint embedded closed circles, called state circles. We call a state which possesses the maximum number of state circles a maximum state. The maximum state is closely related to the genus of the corresponding link, and has therefore been studied. In this talk, we will discuss some of the recent progress we have made on this topic.
When attacking various difficult problems in the field of Diophantine approximation the application of certain topological games has proven extremely fruitful in recent times due to the amenable properties of the associated 'winning' sets. Other problems in Diophantine approximation have recently been solved via the method of constructing certain tree-like structures inside the Diophantine set of interest. In this talk I will discuss how one broad method of tree-like construction, namely the class of 'generalised Cantor sets', can be formalized for use in a wide variety of problems. By introducing a further class of so-called 'Cantor-winning' sets we may then provide a criterion for arbitrary sets in a metric space to satisfy the desirable properties usually attributed to winning sets, and so in some sense unify the two above approaches. Applications of this new framework include new answers to questions relating to the mixed Littlewood conjecture and the $\times2, \times3$ problem. The talk will be aimed at a broad audience.
This is joint work with our former honours student Alex Muir. We look at the variety of lengths of cycles in Cayley graphs on generalized dihedral groups.
Consider a function from the circle to itself such that the derivative is greater than one at every point. Examples are maps of the form f(x) = mx for integers m > 1. In some sense, these are the only possible examples. This fact and the corresponding question for maps on higher dimensional manifolds was a major motivation for Gromov to develop pioneering results in the field of geometric group theory.
In this talk, I'll give an overview of this and other results relating dynamical systems to the geometry of the manifolds on which they act and (time permitting) talk about my own work in the area.
In celebration of both a special "big" pi Day (3/14/15) and the 2015 centennial of the Mathematical Association of America, we review the illustrious history of the constant $\pi$ in the pages of the American Mathematical Monthly.
This talk showcases some large numbers and where they came from.
A mixed formulation for a Tresca frictional contact problem in linear elasticity is considered in the context of boundary integral equations, and is later extended to Coulomb friction. The discrete Lagrange multiplier, an approximation of the surface traction on the contact boundary part, is a linear combination of biorthogonal basis functions. The biorthogonality allows us to rewrite the variational inequality constraints as a simple set of complementarity problems, thus enabling an efficient application of a semi-smooth Newton solver for the discrete mixed problems. Typically, the solution of frictional contact problems is of reduced regularity at the interfaces between contact and non-contact and between stick and slip. To identify the a priori unknown locations of these interfaces, a posteriori error estimates of residual and hierarchical type are introduced. For a stabilized version of our mixed formulation (with the Poincaré-Steklov operator) we also present a priori estimates for the solution. Numerical results show the applicability of the error estimators and the superiority of hp-adaptivity compared to low-order uniform and adaptive approaches.
Ernst Stephan is a visitor of Bishnu Lamichhane.
This week I shall conclude my discussion of pancyclicity and Cayley graphs on generalized dihedral groups.
This presentation will explore the specific features of teaching mathematics in engineering programmes that transcend the division between technical, scientific and design disciplines, and how students in such programmes differ from traditional engineering students. Data comes from a study at the Media Technology Department of Aalborg University in Copenhagen, Denmark. Media Technology is a degree programme that combines technology and creativity and looks at the technology behind areas such as advanced computer graphics, games, electronic music, animations, interactive art and entertainment, to name a few. Over the span of the programme, students are given a strong technical foundation, both in theory and in practice.
The presentation emerges from research by my PhD student Evangelia Triantafylloyu and myself. The study which will be presented here used performance tests, attitude questionnaires, interviews with students, and observations of mathematics-related courses. The study focused on investigating student performance and retention in mathematics, attitudes towards mathematics, and preferences regarding teaching and learning methods, including a flipped-classroom approach using videos produced by course teachers. The outcome of this study can be used to create a profile of a typical student and to tailor approaches for teaching mathematics to this discipline. Moreover, it can be used as a reference point for investigating ways to improve mathematics education in other creative engineering studies.
About the Speaker: Olga Timcenko joined the Medialogy department of Aalborg University in Copenhagen in fall 2006 as an Associate Professor. Before joining the University, she was a Senior Technology Consultant in LEGO Business Development, LEGO Systems A/S, where she worked for different departments of LEGO on research and development of multimedia materials for children, including LEGO Digital Designer and LEGO Mindstorms NXT. She was active in the FIRST LEGO League project (a world-wide robotics competition among school children) and the Computer Clubhouse project. During 2003-2006, she was LEGO's team leader in the EU-financed Network of Excellence in Technology-Enhanced Learning called Kaleidoscope, and actively participated in several Kaleidoscope JERIPs and SIGs. She has a Ph.D. in Robotics from Suddansk University in Odense, Denmark, and she is author/co-author of 40+ conference and journal papers in the field of robotics, children and technology, and of 4 international patents in the field of virtual 3D worlds / 3D user interfaces for children. Her last project for LEGO was the redesign of the Mindstorms iconic programming language for children (the product was launched world-wide in August 2006).
I will discuss a combinatorial problem coming from database design. The problem can be interpreted as maximizing the number of edges in a certain hypergraph subject to a recoverability condition. It was solved recently by the high school student Max Aehle, who came up with a nice argument using the polynomial method.
Dengue is caused by four different serotypes, where individuals infected by one of the serotypes obtain lifelong immunity to that serotype but not to the other serotypes. Individuals with secondary infections may contract the more dangerous form of dengue, called dengue hemorrhagic fever (DHF), because of higher viral load. Because of the unsustainability of traditional measures, the use of the bacterium Wolbachia has been proposed as an alternative strategy against dengue fever. However, little research has been conducted to study the effectiveness of this intervention in the field. Understanding the effectiveness of this intervention is of importance before it is widely implemented in the real world. In this talk, I will explain the effectiveness of this intervention, present the mathematical models that I have developed to study it, and explain how these models are different from the existing one. I will also present the effects of the presence of multiple strains of dengue on dengue transmission dynamics.
Supervisors: David Allingham, Roslyn Hickson (IBM), Kathryn Glass (ANU), Irene Hudson
We will talk on the validity of the mean ergodic theorem along left Følner sequences in a countable amenable group G. Although the weak ergodic theorem always holds along any left Følner sequence in G, we will provide examples where the mean ergodic theorem fails in quite dramatic ways. On the other hand, if G does not admit any ICC quotients, e.g. if G is virtually nilpotent, then we will prove that the mean ergodic theorem does indeed hold along any left Følner sequence.
Based on joint work with M. Björklund (Chalmers).
We introduce a subfamily of additive enlargements of a maximally monotone operator $T$. Our definition is inspired by the seminal work of Fitzpatrick presented in 1988. These enlargements are a subfamily of the family of enlargements introduced by Svaiter in 2000. For the case $T = \partial f$, we prove that some members of the subfamily are smaller than the $\varepsilon$-subdifferential enlargement. For this choice of $T$, we can construct a specific enlargement which coincides with the $\varepsilon$-subdifferential. Since these enlargements are all additive, they can be seen as structurally closer to the $\varepsilon$-subdifferential enlargement.
Joint work with Juan Enrique Martínez-Legaz (Universitat Autonoma de Barcelona), Mahboubeh Rezaei (University of Isfahan, Iran), and Michel Théra (University of Limoges).
I will explain what an equation in a free group is, why they are interesting, and how to solve them. The talk will be accessible to anyone interested in maths or computer science or logic.
I have recently [2] shown that each group $Z_2^{2m}$ gives rise to a pair of bent functions with disjoint support, whose Cayley graphs are a disjoint pair of strongly regular graphs $\Delta_m[-1]$, $\Delta_m[1]$ on $4^m$ vertices. The two strongly regular graphs are twins in the sense that they have the same parameters $(\nu, k, \lambda, \mu)$. For $m < 4$, the two strongly regular graphs are isomorphic. For $m \geq 4$, they are not isomorphic, because the size of the largest clique differs. In particular, the largest clique size of $\Delta_m[-1]$ is $\rho(2^m)$ and the largest clique in $\Delta_m[1]$ has size at least $2^m$, where $\rho(n)$ is the Hurwitz-Radon function. This non-isomorphism result disproves a number of conjectures that I made in a paper on constructions of Hadamard matrices [1].
[1] Paul Leopardi, "Constructions for Hadamard matrices, Clifford algebras, and their relation to amicability - anti-amicability graphs", Australasian Journal of Combinatorics, Volume 58(2) (2014), pp. 214–248.
[2] Paul Leopardi, "Twin bent functions and Clifford algebras", accepted 13 January 2015 by the Springer Proceedings in Mathematics and Statistics (PROMS): Algebraic Design Theory and Hadamard Matrices (ADTHM 2014).
Supervisor: Murray Elder
Supervisor: Mike Meylan
Supervisor: Wadim Zudilin
In this talk I will present the main results of my PhD thesis (by the same name), which focuses on the application of matrix determinants as a means of producing number-theoretic results.
Motivated by an investigation of properties of the Riemann zeta function, we examine the growth rate of certain determinants of zeta values. We begin with a generalisation of determinants based on the Hurwitz zeta function, where we describe the arithmetic properties of its denominator and establish an asymptotic bound. We later employ a determinant identity to bound the growth of positive Hankel determinants. Noting the positivity of determinants of Dirichlet series allows us to prove specific bounds on determinants of zeta values in particular, and of Dirichlet series in general. Our results are shown to be the best that can be obtained from our method of bounding, and we conjecture a slight improvement could be obtained from an adjustment to our specific approach.
Within the course of this investigation we also consider possible geometric properties which are necessary for the positivity of Hankel determinants, and we examine the role of Hankel determinants in irrationality proofs via their connection with Padé approximation.
Computers are changing the way we do mathematics, as well as introducing new research agendas. Computational methods in mathematics, including symbolic and numerical computation and simulation, are by now familiar. These lectures will explore the way that "formal methods," based on formal languages and logic, can contribute to mathematics as well.
In the 19th century, George Boole argued that if we take mathematics to be the science of calculation, then symbolic logic should be viewed as a branch of mathematics: just as number theory and analysis provide means to calculate with numbers, logic provides means to calculate with propositions. Computers are, indeed, good at calculating with propositions, and there are at least two ways that this can be mathematically useful: first, in the discovery of new proofs, and, second, in verifying the correctness of existing ones.
The first goal generally falls under the ambit of "automated theorem proving" and the second falls under the ambit of "interactive theorem proving." There is no sharp distinction between these two fields, however, and the line between them is becoming increasingly blurry. In these lectures, I will provide an overview of both fields and the interactions between them, and speculate as to the roles they can play in mainstream mathematics.
I will aim to make the lectures accessible to a broad audience. The first lecture will provide a self-contained overview. The remaining lectures are for the most part independent of one another, and will not rely on the first lecture.
The seminar will provide a brief overview of the potential for category theory (CT) to contribute to quantitative analysis in the Social Sciences. This will be followed by a description of CT as a "Rosetta Stone" linking topology, algebra, computation, and physics together. This carries over to process thinking and circuit analysis. Coecke and Parquette's approach to diagrammatic analysis is examined to emphasize the efficiency of block-shifting techniques over diagram chasing. Baez and Erbele's application of CT to feedback control is the main focus of analysis, and this is followed by a brief excursion into multicategories (cobordisms), before finishing up with some material on coalgebras and transition systems.
Have you ever tried to add up the numbers 1+1/2+1/3+...? If you've never thought about this before, then give it a go (and don't Google the answer!) In this talk we will settle this relatively easy question and consider how things might change if we try to thin out the sum a bit. For instance, what if we only used the prime numbers 1/2+1/3+1/5+...? Or what about the square numbers 1+1/4+1/9+...? There will be some algebra and integration at times, but if you can add fractions (or use a calculator) then you should follow almost everything.
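If you would like to experiment before the talk, the following sketch tabulates partial sums of the three series for increasing cutoffs; watching how each column behaves as the cutoff grows is already half the story.

    def primes_up_to(n):
        """Simple sieve of Eratosthenes."""
        sieve = [True] * (n + 1)
        sieve[:2] = [False, False]
        for p in range(2, int(n**0.5) + 1):
            if sieve[p]:
                sieve[p*p::p] = [False] * len(sieve[p*p::p])
        return [i for i, flag in enumerate(sieve) if flag]

    for N in (10**3, 10**4, 10**5, 10**6):
        harmonic = sum(1.0 / n for n in range(1, N + 1))
        primes = sum(1.0 / p for p in primes_up_to(N))
        squares = sum(1.0 / n**2 for n in range(1, N + 1))
        print(f"N = {N:>7}:  sum 1/n = {harmonic:8.4f}   sum 1/p = {primes:7.4f}   sum 1/n^2 = {squares:8.6f}")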
Starting with a substitution tiling, such as the Penrose tiling, we demonstrate a method for constructing infinitely many new substitution tilings. Each of these new tilings is derived from a graph iterated function system and the tiles typically have fractal boundary. As an application of fractal tilings, we construct an odd spectral triple on a C*-algebra associated with an aperiodic substitution tiling. Even though spectral triples on substitution tilings have been extremely well studied in the last 25 years, our construction produces the first truly noncommutative spectral triple associated with a tiling. My work on fractal substitution tilings is joint with Natalie Frank and Sam Webster, and my work on spectral triples is joint with Michael Mampusti.
In this colloquium-style presentation I will describe these combinatorial objects and how they relate to each other. Time permitting, I will also show how they can be used in other areas of Mathematics. Joint work with Sooran Kang and Samuel Webster.
Arising originally from the analysis of a family of compressed sensing matrices, Ian Wanless and I recently investigated a number of linear algebra problems involving complex Hadamard matrices. I will discuss our main result, which relates rank-one submatrices of Hadamard matrices to the number of non-zero terms in a representation of a fixed vector with respect to two unbiased bases of a finite dimensional vector space. Only a basic knowledge of linear algebra will be assumed.
Advantages of EEG in studying brain signals include excellent temporal localization and, potentially, good spatial localization, given good models for source localization in the brain. Phase synchrony and cross-frequency coupling are two phenomena believed to indicate cooperation of different brain regions in cognition through messaging via different frequency bands. To verify these hypotheses requires the ability to extract time-frequency localized components from complex multicomponent EEG data. One such method, empirical mode decomposition, has shown increasing promise in engineering, and we will review recent progress on this approach. Another potential method uses bases or frames of optimally time-frequency localized signals, so-called prolate spheroidal wave functions. New properties of these functions developed in joint work with Jeff Hogan will be reviewed and potential applications to EEG will be discussed.
This will be an informal talk from our UoN Engineering colleague Prof Bill McBride who recently visited some "Mid-West" Universities in the USA. Prof McBride will discuss what he saw and learnt, with reference to first year maths teaching for Engineering students.
Managing railways in general, and high-speed rail in particular, is a very complex task which involves many different interrelated decisions at the strategic, tactical, and operational levels. In this research two different mixed integer linear programming models are presented, which are the literature's first models of their kind. In the first model a single line with two different train types is considered. In the second model cyclic train timetabling and platform assignment problems are considered and solved to optimality. For this model, methods for obtaining bounds on the first objective function are presented. Some pre-processing techniques to reduce the number of decision variables and constraints are also proposed. The proposed models' objectives are to minimize (1) the cycle length, called the Interval, and (2) the total journey time of all trains dispatched from their origin in each cycle. Here we explicitly consider the minimization of the cycle length using linear constraints and a linear objective function. The proposed models are different from, and faster than, the widely-used Periodic Event Scheduling Problem (PESP).
In recent years, there has been quite a bit of interest in generalized Fourier transforms in Clifford analysis and in particular for the so-called Clifford-Fourier transform.
In the first part of the talk I will provide some motivation for the study of this transform. In the second part we will develop a new technique to find a closed formula for its integral kernel, based on the familiar Laplace transform. As a bonus, this yields a compact and elegant formula for the generating function of all even dimensional kernels.
I'll give an overview of some recent developments in the theory of groups of automorphisms of trees which are discrete in the full automorphism group of the tree and are locally-transitive. I'll also mention some questions which have been provoked by this work.
We generalize the Burger-Mozes universal groups acting on regular trees by prescribing the local action on balls of a given radius, and study the basic properties of this construction. We then apply our results to prove a weak version of the Goldschmidt-Sims conjecture for certain classes of primitive permutation groups.
We study maximal monotone inclusions from the perspective of (convex) gap functions.
We propose a very natural gap function and will demonstrate how this function arises from the Fitzpatrick function — a convex function used effectively to represent maximal monotone operators.
This approach allows us to use the powerful strong Fitzpatrick inequality to analyse solutions of the inclusion.
This is joint work with Joydeep Dutta.
Functions that are piecewise defined are a common sight in mathematics, while convexity is a property especially desired in optimization. Suppose now a piecewise-defined function is convex on each of its defining components – when can we conclude that the entire function is convex? Our main result provides sufficient conditions for a piecewise-defined function f to be convex. We also provide a sufficient condition for checking the convexity of a piecewise linear-quadratic function, a class which plays an important role in computer-aided convex analysis.
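A crude numerical companion (a randomised necessary-condition check only, not the sufficient conditions of the talk): test the midpoint inequality for a piecewise-defined function along many random pairs of points.

    import random

    def looks_convex(f, lo=-5.0, hi=5.0, trials=100_000, tol=1e-12):
        """Randomised check of the midpoint inequality f((x+y)/2) <= (f(x)+f(y))/2."""
        for _ in range(trials):
            x, y = random.uniform(lo, hi), random.uniform(lo, hi)
            if f((x + y) / 2) > (f(x) + f(y)) / 2 + tol:
                return False
        return True

    f = lambda x: x * x if x <= 0 else x        # convex on each piece, convex overall
    g = lambda x: min(x * x, 1.0)               # convex on each piece, not convex overall
    print(looks_convex(f), looks_convex(g))     # expect: True False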
Based on joint work with Heinz H. Bauschke (Mathematics, UBC Okanagan) and Hung M. Phan (Mathematics, University of Massachusetts Lowell).
Mathematicians sometimes speak of the beauty of mathematics, which to us is reflected in proofs and solutions for the most part. I am going to give a few proofs that I find very nice. This is material that postgraduate students in discrete mathematics should certainly know exists.
This completion talk is in two parts. In the first part, I will present a characterisation of the cyclic Douglas-Rachford method's behaviour, generalising a result which was presented in my confirmation seminar. In the second part, I will explore non-convex regularity notions in an application arising in biochemistry.
Amenability is of interest for many reasons, not least of which is its paradoxical decomposition into so many various characterisations, each equal to the whole. Two of these are the characterisation in terms of the cogrowth rate, and the existence of a Følner sequence. In exploring a known method of computing the cogrowth rate using a random walk, and by analysing which groups seem to be pathological for this algorithm, we discover new connections between these properties.
Partitioning is a fundamental technique in graph theory, and graph partitioning is widely used to solve several combinatorial problems. We will discuss the role of edge partitioning techniques in graph embedding. Graph embedding includes combinatorial problems such as the bandwidth problem, the wirelength problem and the forwarding index problem, as well as cheminformatics problems such as the Wiener index, the Szeged index and the PI index. In this seminar, we study the convex partition and its characterization. In addition, we analyze the relationship between the convex partition and some other edge partitions, such as the Szeged edge partition and the channel edge partition. The graphs that induce convex partitions are bipartite. We will discuss the difficulties in extending this technique to non-bipartite graphs.
In either the inviscid limit of the Euler equations, or the viscously dominated limit of the Stokes equations, the determination of fluid flows can be reduced to solving singular integral equations on immersed structures and bounding surfaces. Further dimensional reduction is achieved using asymptotics when these structures are sheets or slender fibers. These reductions in dimension, and the convolutional second-kind structure of the integral equations, allows for very efficient and accurate simulations of complex fluid-structure interaction problems using solvers based on the Fast Multipole or related methods. These representations also give a natural setting for developing implicit time-stepping methods for the stiff dynamics of elastic structures moving in fluids. I'll discuss these integral formulations, their numerical treatment, and application to simulating structures moving in high-speed flows (flapping flags and flyers), and for resolving the complex interactions of many, possibly flexible, bodies moving in microscopic biological flows.
The existence of perfect matchings in regular graphs is a fundamental problem in graph theory, and it closely models many real-world problems such as broadcasting and network management. Recently, we have studied the number of edge-disjoint perfect matchings in regular graphs, and using some well-known results on the existence of perfect matchings and operations forcing unique perfect matchings in regular graphs, we have been able to make some pleasant progress. In this talk, we will present the new results and briefly discuss the proof.
Stability analysis plays a central role in nonlinear control and systems theory. Stability is, in fact, the fundamental requirement for all practical control systems. In this research, advanced stability analysis techniques are reviewed and developed for discrete-time dynamical systems. In particular, we study the relationships between input-to-state stability related properties and $\ell_2$-type stability properties. These considerations naturally lead to the study of input-output models and, further, to questions of incremental stability and convergent dynamics. Future work will also outline several application scenarios for our theory, including observer analysis and secure communication.
Supervisors: A/Prof. Christopher Kellett and Dr. Björn Rüffer
Noga Alon's Combinatorial Nullstellensatz, published in 1999, is a statement about polynomials in many variables and what happens if one of these vanishes over the set of common zeros of some others. In contrast to Hilbert's Nullstellensatz, it makes strong assumptions about the polynomials it is talking about, and this leads to a tool for producing short and elegant proofs of numerous old and new results in combinatorial number theory and graph theory. I will present the proof of the algebraic result and some of the combinatorial applications in the 1999 paper.
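For reference, the coefficient form of the theorem (the version used in most of the applications; stated here from memory) reads: if $f \in F[x_1,\dots,x_n]$ has total degree $\deg f = t_1 + \dots + t_n$ and the coefficient of the monomial $x_1^{t_1}\cdots x_n^{t_n}$ in $f$ is nonzero, then for any subsets $S_1,\dots,S_n$ of the field $F$ with $|S_i| > t_i$ there exist elements $s_i \in S_i$ such that $f(s_1,\dots,s_n) \neq 0$.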
After briefly describing a few more simple applications of Alon's Nullstellensatz, I will present in detail Reiher's amazing proof of the Kemnitz conjecture regarding lattice points in the plane.
Supervisors: Mirka Miller, Joe Ryan and Andrea Semanicova-Fenovcikova
We give some background to labelling schemes such as graceful, harmonious, magic, antimagic and irregular total labellings. Then we will describe why the study of graph labelling is important by narrating some applications of graph labelling. Next we will briefly describe the methodology, such as Roberts' construction for obtaining completely separating systems (CSS), which will help us to determine antimagic labellings of graphs, and Alon's Combinatorial Nullstellensatz. We will illustrate an example from the many applications of graph labelling. Finally we will introduce reflexive irregular total labelling and explain its importance. To conclude, we present a research plan and timeline for the candidature.
I will complete the proof of the Kemnitz conjecture and make some remarks about related zero-sum problems.
In this talk we will begin with a brief history of the mathematics of aperiodic tilings of Euclidean space, highlighting their relevance to the theory of quasicrystals. Next we will focus on an important collection of point sets, cut and project sets, which come from a dynamical construction and provide us with a mathematical model for quasicrystals. After giving definitions and examples of these sets, we will discuss their relationship with Diophantine approximation, and show how the interplay between these two subjects has recently led to new results in both of them.
Lift-and-Project operators (which map compact convex sets to compact convex sets in a certain contractive way, via higher dimensional convex representations of these sets) provide an automatic way for constructing all facets of the convex hull of 0,1 vectors in a polytope given by linear or polynomial inequalities. They also yield tractable approximations provided that the input polytope is tractable and that we only apply the operators O(1) times. There are many generalizations of the theory of these operators which can be used, in theory, to generate (eventually, in the limit) arbitrarily tight, convex relaxations of essentially arbitrary nonconvex sets. Moreover, Lift-and-Project methods provide universal ways of applying Semidefinite Programming techniques to Combinatorial Optimization problems, and in general, to nonconvex optimization problems.
I will survey some of the developments (some recent, some not so recent) that I have been involved in, especially those utilizing Lift-and-Project methods and Semidefinite Optimization. I will touch upon the connections to Convex Algebraic Geometry and present various open problems.
We propose new path-following predictor-corrector algorithms for solving convex optimization problems in conic form. The main structural properties used in our design and analysis of the algorithms hinge on some key properties of a special class of very smooth, strictly convex barrier functions. Even though our analysis has primal and dual components, our algorithms work with the dual iterates only, in the dual space. Our algorithms converge globally at the same worst-case rate as the current best polynomial-time interior-point methods. In addition, our algorithms have the local superlinear convergence property under some mild assumptions. The algorithms are based on an easily computable gradient proximity measure, which ensures an automatic transformation of the global linear rate of convergence to the locally superlinear one under some mild assumptions. Our step-size procedure for the predictor step is related to the maximum step size (the one that takes us to the boundary).
This talk is based on joint work with Yu. Nesterov.
We survey the literature on orthogonal polynomials in several variables, starting from Hermite's work in the late 19th century to the works of Zernike (1920s) and Ito (1950s). We explore combinatorial and analytic properties of the Ito polynomials and offer a general class in 2 dimensions which has interesting structural properties. Connections with certain PDEs will be mentioned.
Given a finite presentation of a group, proving properties of the group can be difficult. Indeed, many questions about finitely presented groups are unsolvable in general. Algorithms exist for answering some questions while for other questions algorithms exist for verifying the truth of positive answers. An important tool in this regard is the Todd-Coxeter coset enumeration procedure. It is possible to extract formal proofs from the internal working of coset enumerations. We give examples of how this works, and show how the proofs produced can be mechanically verified and how they can be converted to alternative forms. We discuss these automatically produced proofs in terms of their size and the insights they offer. We compare them to hand proofs and to the simplest possible proofs. We point out that this technique has been used to help solve a longstanding conjecture about an infinite class of finitely presented groups.
In scanning ptychography, an unknown specimen is illuminated by a localised illumination function resulting in an exit-wave whose intensity is observed in the far-field. A ptychography dataset is a series of these observations, each of which is obtained by shifting the illumination function to a different position relative to the specimen with neighbouring illumination regions overlapping. Given a ptychographic data set, the blind ptychography problem is to simultaneously reconstruct the specimen, illumination function, and relative phase of the exit-wave. In this talk I will discuss an optimisation framework which reveals current state-of-the-art reconstruction methods in ptychography as (non-convex) alternating minimization-type algorithms. Within this framework, we provide a proof of global convergence to critical points using the Kurdyka-Łojasiewicz property.
We use random walks to experimentally compute the first few terms of the cogrowth series for a finitely presented group. We propose candidates for the amenable radical of any non-amenable group, and a Følner sequence for any amenable group, based on convergence properties of random walks.
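The basic experiment can be imitated on a toy example. The sketch below (illustration only; the group $\mathbb{Z}^2$ stands in for the finitely presented groups of interest) estimates the first few cogrowth coefficients, i.e. the number of words of length $n$ in the generators and their inverses that represent the identity, by sampling random words; for $\mathbb{Z}^2$ the exact values $\binom{2m}{m}^2$ are available for comparison.

    import random
    from math import comb

    MOVES = {"a": (1, 0), "A": (-1, 0), "b": (0, 1), "B": (0, -1)}

    def cogrowth_estimate(n, samples=500_000):
        """Estimate the number of length-n words over {a, a^-1, b, b^-1} equal to 1 in Z^2."""
        hits = 0
        for _ in range(samples):
            x = y = 0
            for _ in range(n):
                dx, dy = MOVES[random.choice("aAbB")]
                x, y = x + dx, y + dy
            hits += (x == 0 and y == 0)
        return 4**n * hits / samples

    for m in (1, 2, 3):
        print(2 * m, round(cogrowth_estimate(2 * m)), "exact:", comb(2 * m, m) ** 2)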
The Hardy and Paley-Wiener spaces are defined via important structural theorems relating the support of a function's Fourier transform to the growth rate of the function's analytic extension. In this talk we show that analogues of these spaces exist for Clifford-valued functions in n dimensions, using the Clifford-Fourier Transform of Brackx et al. and the monogenic ($n+1$-dimensional) extension of these functions.
We consider monotone systems defined by ODEs on the positive orthant in $\mathbb{R}^n$. These systems appear in various areas of application, and we will discuss in some level of detail one of these applications related to large-scale systems stability analysis.
Lyapunov functions are frequently used in stability analysis of dynamical systems. For monotone systems so called sum- and max-separable Lyapunov functions have proven very successful. One can be written as a sum, the other as a maximum of functions of scalar arguments.
We will discuss several constructive existence results for both types of Lyapunov function. To some degree, these functions can be associated with left- and right eigenvectors of an appropriate mapping. However, and perhaps surprisingly, examples will demonstrate that stable systems may admit only one or even neither type of separable Lyapunov function.
A motion which is periodic may be considered symmetric under a transformation in time. A measure of the phase relationship these motions have with respect to a geometric figure which is symmetric under some transformation in space is presented. The implications this has for the discretised patterns generated are discussed. The talk focuses on theoretical formalisms, such as those which display the fractal patterns of 'strange attractors', rather than group theory for symmetric transformations.
We study the family of self-inversive polynomials of degree $n$ whose $j$th coefficient is $\gcd(n,j)^k$, for a fixed integer $k \geq 1$. We prove that these polynomials have all of their roots on the unit circle, with uniform angular distribution. In the process we prove some new results on Jordan's totient function. We also show that these polynomials are irreducible, apart from an obvious linear factor, whenever $n$ is a power of a prime, and conjecture that this holds for all $n$. Finally we use some of these methods to obtain general results on the zero distribution of self-inversive polynomials and of their "duals" obtained from the discrete Fourier transforms of the coefficients sequence. (Joint work with Sinai Robins).
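These claims are easy to probe numerically. A quick sketch (using numpy's root-finder; purely illustrative) builds the polynomial with coefficients $\gcd(n,j)^k$ and reports how far its roots are from the unit circle:

    import numpy as np
    from math import gcd

    def gcd_poly_roots(n, k=1):
        """Roots of P(z) = sum_{j=0}^{n} gcd(n, j)^k z^j."""
        coeffs = [gcd(n, j)**k for j in range(n, -1, -1)]   # highest degree first
        return np.roots(coeffs)

    for n in (12, 30, 64):
        roots = gcd_poly_roots(n, k=2)
        print(n, "max | |z| - 1 | over the roots:", np.max(np.abs(np.abs(roots) - 1)))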
I will talk a bit about the benefits of a regular outlook.
We discuss problems of approximation of an irrational number by rationals whose numerators and denominators lie in prescribed arithmetic progressions. Results are given from both a metrical and a non-metrical point of view on the one hand, and from both an asymptotic and a uniform point of view on the other. The principal novelty of this theory is a Khintchine-type theorem for uniform approximation in this setup. Time permitting, some applications of this work will be discussed.
A dimension adaptive algorithm for sparse grid quadrature in reproducing kernel Hilbert spaces on products of spheres uses a greedy algorithm to approximately solve a down-set constrained binary knapsack problem. The talk will describe the quadrature problem, the knapsack problem and the algorithm, and will include some numerical examples.
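The greedy idea can be sketched in a few lines (the items, values, costs and partial order below are invented for illustration; in the algorithm proper the items are multi-indices of the sparse grid): repeatedly add the item with the best value-to-cost ratio among those whose predecessors in the down-set order have already been selected.

    def greedy_downset_knapsack(items, predecessors, budget):
        """items: {name: (value, cost)}; predecessors: {name: set of names required first}."""
        chosen, spent = set(), 0.0
        while True:
            admissible = [i for i in items
                          if i not in chosen
                          and predecessors[i] <= chosen
                          and spent + items[i][1] <= budget]
            if not admissible:
                return chosen
            best = max(admissible, key=lambda i: items[i][0] / items[i][1])
            chosen.add(best)
            spent += items[best][1]

    # Invented example: indices of a tiny two-dimensional grid.
    items = {(0, 0): (1.0, 1), (1, 0): (0.6, 2), (0, 1): (0.5, 2), (1, 1): (0.4, 4)}
    predecessors = {(0, 0): set(), (1, 0): {(0, 0)}, (0, 1): {(0, 0)}, (1, 1): {(1, 0), (0, 1)}}
    print(greedy_downset_knapsack(items, predecessors, budget=6))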
We will be answering the following question raised by Christopher Bishop:
'Suppose we stand in a forest with tree trunks of radius $r > 0$ and no two trees centered closer than unit distance apart. Can the trees be arranged so that we can never see further than some distance $V < \infty$, no matter where we stand and what direction we look in? What is the size of $V$ in terms of $r$?'
The methods used to study this problem involve Fourier analysis and sharp estimates of exponential sums.
We consider the stability of a class of abstract positive systems originating from the recurrence analysis of stochastic systems, such as multiclass queueing networks and semimartingale reflected Brownian motions. We outline that this class of systems can also be described by differential inclusions in a natural way. We will point out that because of the positivity of the systems the set-valued map defining the differential inclusion is not upper semicontinuous in general and, thus, well-known characterizations of asymptotic stability in terms of the existence of a (smooth) Lyapunov function cannot be applied to this class of positive systems. Following an abstract approach, based on common properties of the positive systems under consideration, we show that asymptotic stability is equivalent to the existence of a Lyapunov function. Moreover, we examine the existence of smooth Lyapunov functions. Putting an assumption on the trajectories of the positive systems which demands for any trajectory the existence of a neighboring trajectory such that their difference grows linearly in time and distance of the starting points, we prove the existence of a $C^\infty$-smooth Lyapunov function. Looking at this hypothesis from the differential inclusions perspective it turns out that differential inclusions defined by Lipschitz continuous set-valued maps taking nonempty, compact and convex values have this property.
We consider identities satisfied by discrete analogues of Mehta-like integrals. The integrals are related to Selberg’s integral and the Macdonald conjectures. Our discrete analogues have the form
$$S_{\alpha,\beta,\delta} (r,n) := \sum_{k_1,...,k_r\in\mathbb{Z}} \prod_{1\leq i < j\leq r} |k_i^\alpha - k_j^\alpha|^\beta \prod_{j=1}^r |k_j|^\delta \binom{2n}{n+k_j},$$ where $\alpha,\beta,\delta,r,n$ are non-negative integers subject to certain restrictions.
In the cases that we consider, it is possible to express $S_{\alpha,\beta,\delta} (r,n)$ as a product of Gamma functions and simple functions such as powers of two. For example, if $1 \leq r \leq n$, then $$S_{2,2,3} (r,n) = \prod_{j=1}^r \frac{(2n)!j!^2}{(n-j)!^2}.$$
The emphasis of the talk will be on how such identities can be obtained, with a high degree of certainty, using numerical computation. In other cases the existence of such identities can be ruled out, again with a high degree of certainty. We shall not give any proofs in detail, but will outline the ideas behind some of our proofs. These involve $q$-series identities and arguments based on non-intersecting lattice paths.
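The kind of check referred to can be sketched directly: truncate the sum to $|k_j|\leq n$ (the binomial factor vanishes outside that range), evaluate $S_{2,2,3}(r,n)$ in exact integer arithmetic, and compare with the product formula above.

    from itertools import product
    from math import comb, factorial

    def S(alpha, beta, delta, r, n):
        """Direct evaluation of S_{alpha,beta,delta}(r,n); terms with |k_j| > n vanish."""
        total = 0
        for ks in product(range(-n, n + 1), repeat=r):
            term = 1
            for i in range(r):
                for j in range(i + 1, r):
                    term *= abs(ks[i]**alpha - ks[j]**alpha)**beta
            for k in ks:
                term *= abs(k)**delta * comb(2 * n, n + k)
            total += term
        return total

    def rhs(r, n):
        """The stated product formula for S_{2,2,3}(r,n), valid for 1 <= r <= n."""
        value = 1
        for j in range(1, r + 1):
            value *= factorial(2 * n) * factorial(j)**2 // factorial(n - j)**2
        return value

    for r, n in [(1, 3), (2, 4), (3, 5)]:
        print(r, n, S(2, 2, 3, r, n) == rhs(r, n))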
This is joint work with Christian Krattenthaler and Ole Warnaar.
The use of GPUs for scientific computation has undergone phenomenal growth over the past decade, as hardware originally designed with limited instruction sets for image generation and processing has become fully programmable and massively parallel. This talk discusses the classes of problem that can be attacked with such tools, as well as some practical aspects of implementation. A direction for future research by the speaker is also discussed.
I am going to discuss a construction of functional calculus $$f\mapsto f(A,B),$$ where $A$ and $B$ are noncommuting self-adjoint operators. I am going to discuss the problem of estimating the norms $\|f(A_1,B_1)-f(A_2,B_2)\|$, where the pair $(A_2,B_2)$ is a perturbation of the pair $(A_1,B_1)$.
We'll answer the question "What's a wavelet?" and discuss continuous wavelet transforms on the line and connections with representation theory and singular integrals. The focus will then turn to discretization techniques, including multiresolution analysis. Matrix completion problems arising from higher-dimensional wavelet constructions will also be described.
Firstly, from [1] we consider a mixed formulation for an elliptic obstacle problem for a 2nd order operator and present an hp-FE interior penalty discontinuous Galerkin (IPDG) method. The primal variable is approximated by a linear combination of Gauss-Lobatto-Lagrange (GLL) basis functions, whereas the discrete Lagrangian multiplier is a linear combination of biorthogonal basis functions. A residual-based a posteriori error estimate is derived. For its construction the approximation error is split into a discretization error of a linear variational equality problem and additional consistency and obstacle condition terms.
Secondly, an hp-adaptive $C^0$-interior penalty method for the bi-Laplace obstacle problem is presented from [2]. Again we take a mixed formulation using GLL basis functions for the primal variable and biorthogonal basis functions for the Lagrangian multiplier, and we also present a residual-based a posteriori error estimate. For both cases (2nd and 4th order obstacle problems) our numerical experiments clearly demonstrate the superior convergence of the hp-adaptive schemes compared with uniform and h-adaptive schemes.
References
[1] L. Banz, E.P. Stephan, A posteriori error estimates of hp-adaptive IPDG-FEM for elliptic obstacle problems, Applied Numerical Mathematics 76 (2014), 76-92.
[2] L. Banz, B.P. Lamichhane, E.P. Stephan, An hp-adaptive $C^0$-interior penalty method for the obstacle problem of clamped Kirchhoff plates, preprint (2015).
(Joint work with Lothar Banz, University Salzburg, Austria)
At the 1987 Ramanujan Centenary meeting Dyson asked for a coherent group-theoretical structure for Ramanujan's mock theta functions analogous to Hecke's theory of modular forms. We extend the work of Bringmann and Ono, and Ahlgren and Treneer on answering this question.