I'll discuss the analytic solution to the limit shape problem for random domino tilings and "lozenge" tilings, and in particular try to explain how these limiting surfaces develop facets.
It is now known for a number of models of statistical physics in two dimensions (such as percolation or the Ising model) that at their critical point, they do behave in a conformally invariant way in the large-scale limit, and do give rise in this limit to random fractals that can be mathematically described via Schramm's Stochastic Loewner Evolutions.
The goal of the present talk will be to discuss some aspects of what remains valid or should remain valid about such models and their conformal invariance, when one looks at them within a fractal-type planar domain. We shall in particular describe (and characterize) a continuous percolation interface within certain particular random fractal carpets. Part of this talk will be based on joint work with Jason Miller and Scott Sheffield.
Liz will talk about how the UoN could make more use of the flipped classroom. The flipped classroom is an approach where content is provided in advance to students and instead of the traditional lecture the time is spent interacting with students through worked examples etc.
Liz will examine impacts on student learning, but also consider how to make this approach manageable within staff workloads and how lecture theatre design can be altered to facilitate this new way of learning.
Polyhedral links, interlinked and interlocked architectures, have been proposed for the description and characterization of DNA and protein polyhedra. Chirality is a very important feature for biomacromolecules. In this talk, we discuss the topological chirality of a type of DNA polyhedral links constructed by the strategy of "n-point stars" and a type of protein polyhedral links constructed by "three-cross curves" covering. We shall ignore DNA sequence and use the orientation of the 2 backbone strands of the dsDNA to orient DNA polyhedral links, thus considering DNA polyhedral links as oriented links with antiparallel orientations. We shall ignore protein sequence and view protein polyhedral links as unoriented ones. It is well known that there is a correspondence between alternating links and plane graphs. We prove that links corresponding to bipartite plane graphs have antiparallel orientations, and under these orientations, their writhes are not zero. As a result, this type of right-handed double crossover 4-turn DNA polyhedral links is topologically chiral. We also prove that the unoriented link corresponding to a connected, even, bipartite plane graph has self-writhe 0, and using the Jones polynomial we present a criterion for chirality of unoriented alternating links with self-writhe 0. By applying this criterion we obtain that 3-regular protein polyhedral links are also topologically chiral. Topological chirality always implies chemical chirality, hence the corresponding DNA and protein polyhedra are all chemically chiral. Our chiral criteria may be used to detect the topological chirality of more complicated DNA and protein polyhedral links to be synthesized by chemists and biologists in the future.
Jonathan Kress of UNSW will be talking about the UNSW experience of using MapleTA for online assignments in Mathematics over an extended period of time.
Ben Carter will be talking about some of the rationale for online assignments, how we're using MapleTA here, and our hopes for the future, including how we might use it as a basis for a flipped classroom approach to some of our teaching.
This talk will give an introduction to the Kepler-Coulomb and harmonic oscillator systems, fundamental in both the classical and quantum worlds. These systems are related by "coupling constant metamorphosis", a remarkable trick that exchanges the energy of one system with the coupling constant of the other. The trick can be seen to be a type of conformal transformation, that is, a scaling of the underlying metric, that maps "conformal symmetries" to "true symmetries" of a Hamiltonian system.
In this talk I will explain the statements above and discuss some applications of coupling constant metamorphosis to superintegrable systems and differential geometry.
A lattice rule with a randomly-shifted lattice estimates a mathematical expectation, written as an integral over the s-dimensional unit hypercube, by the average of n evaluations of the integrand, at the n points of the shifted lattice that lie inside the unit hypercube. This average provides an unbiased estimator of the integral and, under appropriate smoothness conditions on the integrand, it has been shown to converge faster as a function of n than the average at n independent random points (the standard Monte Carlo estimator). In this talk, we study the behavior of the estimation error as a function of the random shift, as well as its distribution for a random shift, under various settings. While it is well known that the Monte Carlo estimator obeys a central limit theorem when $n \rightarrow \infty$, the randomized lattice rule does not, due to the strong dependence between the function evaluations. We show that for the simple case of one-dimensional integrands, the limiting error distribution is uniform over a bounded interval if the integrand is non-periodic, and has a square root form over a bounded interval if the integrand is periodic. We find that in higher dimensions, there is little hope to precisely characterize the limiting distribution in a useful way for computing confidence intervals in the general case. We nevertheless examine how this error behaves as a function of the random shift from different perspectives and on various examples. We also point out a situation where a classical central-limit theorem holds when the dimension goes to infinity, we provide guidelines on when the error distribution should not be too far from normal, and we examine how far from normal the error distribution is in examples inspired by real-life applications.
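As a rough illustration of the construction (my own sketch, not taken from the talk), the following Python fragment estimates a smooth two-dimensional integral with a randomly shifted rank-1 lattice rule and, for comparison, with plain Monte Carlo at the same number of points; the generating vector z is an arbitrary choice made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    n, s = 1024, 2
    z = np.array([1, 433])                            # generating vector (illustrative choice only)
    f = lambda x: np.prod(x + 0.5, axis=1)            # smooth integrand with exact integral 1

    # Randomly shifted rank-1 lattice rule: points {i*z/n + shift} mod 1, i = 0,...,n-1.
    shift = rng.random(s)
    lattice_pts = (np.outer(np.arange(n), z) / n + shift) % 1.0
    lattice_est = f(lattice_pts).mean()

    # Standard Monte Carlo estimator with n independent uniform points.
    mc_est = f(rng.random((n, s))).mean()

    print(abs(lattice_est - 1.0), abs(mc_est - 1.0))  # compare the two errors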
Without convexity the convergence of a descent algorithm can normally only be certified in the weak sense that every accumulation point of the sequence of iterates is critical. This does not at all correspond to what we observe in practice, where these optimization methods always converge to a single limit point, even though convergence may sometimes be slow.
Around 2006 it was observed that convergence to a single limit can be proved for objective functions having certain analytic features. The property which is instrumental here is called the Lojasiewicz inequality, imported from analytic function theory. While this has been successfully applied to smooth functions, the case of non-smooth functions turns out to be more difficult. In this talk we obtain some progress for upper-C1 functions. Then we proceed to show that this is not just out of a theoretical sandpit, but has consequences for applications in several fields. We sketch an application in destructive testing of laminate materials.
As is well known, semidefinite relaxations of discrete optimization problems can yield excellent bounds on their solutions. We present three examples from our collaborative research. The first addresses the quadratic assignment problem; a formulation is developed which yields the strongest lower bounds known for larger dimensions. Utilizing the latest iterative SDP solver and ideas from verified computing, a realistic problem from communications is solved for dimensions up to 512.
A strategy based on the Lovasz theta function is generalized to compute upper bounds on the spherical kissing number utilizing SDP relaxations. Multiple precision SDP solvers are needed, and improvements on known results for all kissing numbers in dimensions up to 23 are obtained. Finally, generalizing ideas of Lex Schrijver, improved upper bounds for general binary codes are obtained in many cases.
Brad Pitt's zombie-attack movie "World War Z" may not seem like a natural jumping-off point for a discussion of mathematics or science, but in fact it was a request I received to review that movie for "The Conversation", and the review I wrote, that led to my being invited to give a public lecture on zombies and maths at the Academy of Science next week. This week's colloquium will largely be a preview of that talk, so it should be generally accessible.
My premise is that movies and maths have something in common. Both enable a trait which seems to be more highly developed in humans than in any other species, with profound consequences: the desire and capacity to explore possibility-space.
The same mathematical models can let us playfully explore how an outbreak of zombie-ism might play out, or how an outbreak of an infectious disease like measles would spread, depending, in part, on what choices we make. Where a movie gives us deep insight into one possibility, mathematics enables us to explore, all at once, millions of scenarios, and see where the critical differences lie.
I will try to use mathematical models of zombie outbreak to discuss how mathematical modelling and mathematical ideas such as functions and phase transitions might enter the public consciousness in a positive way.
The Erdos-Ko-Rado (EKR) Theorem is a classical result in combinatorial set theory and is absolutely fundamental to the development of extremal set theory. It answers the following question: What is the maximum size of a family F of k-element subsets of the set {1,2,...,n} such that any two sets in F have nonempty intersection?
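For orientation, the classical answer (standard background, not part of the abstract): for $n \ge 2k$ the maximum is $$|F| \le \binom{n-1}{k-1},$$ with equality attained by the family of all $k$-subsets containing one fixed element of $\{1,2,\ldots,n\}$.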
In the 1980's Manickam, Miklos and Singhi (MMS) asked the following question: Given that a set A of n real numbers has sum zero, what is the smallest possible number of k-element subsets of A with nonnegative sum? They conjectured that the optimal solutions for this problem look precisely like the extremal families in the EKR theorem. This problem has been open for almost 30 years and many partial results have been proved. There was a burst of activity in 2012, culminating in a proof of the conjecture in October 2013.
This series of talks will explore the basic EKR theorem and discuss some of the recent results on the MMS conjecture.
Nowadays huge amounts of personal data are regularly collected in all spheres of life, creating interesting research opportunities but also a risk to individual privacy. We consider the problem of protecting confidentiality of records used for statistical analysis, while preserving as much of the data utility as possible. Since OLAP cubes are often used to store data, we formulate a combinatorial problem that models a procedure to anonymize 2-dimensional OLAP cubes. In this talk we present a parameterised approach to this problem.
The Erdos-Ko-Rado (EKR) Theorem is a classical result in combinatorial set theory and is absolutely fundamental to the development of extremal set theory. It answers the following question: What is the maximum size of a family F of k-element subsets of the set {1,2,...,n} such that any two sets in F have nonempty intersection?
In the 1980's Manickam, Miklos and Singhi (MMS) asked the following question: Given that a set A of n real numbers has sum zero, what is the smallest possible number of k-element subsets of A with nonnegative sum? They conjectured that the optimal solutions for this problem look precisely like the extremal families in the EKR theorem. This problem has been open for almost 30 years and many partial results have been proved. There was a burst of activity in 2012, culminating in a proof of the conjecture in October 2013.
This series of talks will explore the basic EKR theorem and discuss some of the recent results on the MMS conjecture.
New questions regarding the reliability and verifiability of scientific findings are emerging as computational methods are being increasingly used in research. In this talk I will present a framework for incorporating computational research into the scientific method, namely standards for carrying out and disseminating research to facilitate reproducibility. I will present some recent empirical results on data and code publication; the pilot project http://ResearchCompendia.org for linking data and codes to published results and validating findings; and the "Reproducible Research Standard" for ensuring the distribution of legally usable data and code. If time permits, I will present preliminary work on assessing the reproducibility of published computational findings based on the 2012 ICERM workshop on Reproducibility in Computational and Experimental Mathematics report [1]. Some of this research is described in my forthcoming co-edited books "Implementing Reproducible Research" and "Privacy, Big Data, and the Public Good."
[1] D. H. Bailey, J. M. Borwein, Victoria Stodden "Set the Default to 'Open'," Notices of the AMS, June/July 2013.
This PhD so far has focussed on two distinct optimisation problems pertaining to public transport, as detailed below:
Within public transit systems, so-called flexible transport systems have great potential to offer increases in mobility and convenience and decreases in travel times and operating costs. One such service is the Demand Responsive Connector, which transports commuters from residential addresses to transit hubs via a shuttle service, from where they continue their journey via a traditional timetabled service. We investigate various options for implementing a demand responsive connector and the associated vehicle scheduling problems. Previous work has only considered regional systems, where vehicles drop passengers off at a predetermined station -- we relax that condition and investigate the benefits of allowing alternative transit stations. An extensive computational study shows that the more flexible system offers cost advantages over regional systems, especially when transit services are frequent, or transit hubs are close together, with little impact on customer convenience.
A complement to public transport systems is that of ad hoc ride sharing, where participants (either offering or requesting rides) are paired with participants wanting the reverse by some central service provider. Although such schemes are currently in operation, the lack of certainty offered to riders (i.e. the risk of not finding a match, or of a driver not turning up) discourages potential users. Critically, this can prevent the system from reaching a "critical mass" and becoming self-sustaining. We are investigating the situation where the provider has access to a fleet of dedicated drivers, and may use these to service riders, especially when such a system is in its infancy. We investigate some of the critical pricing issues surrounding this problem, present some optimisation models and provide some computational results.
We show that ESO universal Horn logic (existential second-order logic where the first-order part is a universal Horn formula) is insufficient to capture P, the class of problems decidable in polynomial time. This is true in the presence of a successor relation in the input vocabulary. We provide two proofs -- one based on reduced products of two structures, and another based on approximability theory (the second proof is under the assumption that P is not the same as NP). The difference between the results here and those in (Graedel 1991) is due to the fact that the expressions this talk deals with are at the "structure level", whereas the expressions in (Graedel 1991) are at the "machine level" since they encode machine computations -- a case of "Easier DONE than SAID".
The Erdos-Ko-Rado (EKR) Theorem is a classical result in combinatorial set theory and is absolutely fundamental to the development of extremal set theory. It answers the following question: What is the maximum size of a family F of k-element subsets of the set {1,2,...,n} such that any two sets in F have nonempty intersection?
In the 1980's Manickam, Miklos and Singhi (MMS) asked the following question: Given that a set A of n real numbers has sum zero, what is the smallest possible number of k-element subsets of A with nonnegative sum? They conjectured that the optimal solutions for this problem look precisely like the extremal families in the EKR theorem. This problem has been open for almost 30 years and many partial results have been proved. There was a burst of activity in 2012, culminating in a proof of the conjecture in October 2013.
This series of talks will explore the basic EKR theorem and discuss some of the recent results on the MMS conjecture.
I will describe the research I have been doing with Fran Aragon and others, using graphical methods to study the properties of real numbers. There will be very few formulas and more pictures and movies.
In this final talk of the sequence we will sketch Blinovsky's recent proof of the conjecture: whenever $n$ is at least $4k$, and $A$ is a set of $n$ numbers with sum 0, then there are at least $\binom{n-1}{k-1}$ subsets of size $k$ which have non-negative sum. The nice aspect of the proof is the combination of hypergraph concepts with convex geometry arguments and a Berry-Esseen inequality for approximating the hypergeometric distribution. The not so nice aspect (which will be omitted in the talk) is the amount of very tedious algebraic manipulation that is necessary to verify the required estimates. There are slides for all four MMS talks here.
The TELL ME agent based model will simulate personal protective decisions such as vaccination or hand hygiene during an influenza epidemic. Such behaviour may be adopted in response to communication from health authorities, taking into account perceived influenza risk. The behaviour decisions are to be modelled with a combination of personal attitude, average local attitude, the local number of influenza cases and the case fatality rate. The model is intended to be used to understand the effects of choices about how to communicate with citizens about protecting themselves from epidemics. I will discuss the TELL ME model design, the cognitive theory supporting the design and some of the expected problems in building the simulation.
This year is the fiftieth anniversary of Ringel's posing of the well-known graph decomposition problem called the Oberwolfach problem. In this series of talks, I shall examine what has been accomplished so far, take a look at current work, and suggest a possible new avenue of approach. The material to be presented essentially will be self-contained.
In this talk I will present a general method of finding simple groups acting on trees. This process, beginning with any group $G$ acting on a tree, produces more groups known as the $k$-closures of $G$. I will use several examples to highlight the versatility of this method, and I will discuss the properties of the $k$-closures that allow us to find abstractly simple groups.
This year is the fiftieth anniversary of Ringel's posing of the well-known graph decomposition problem called the Oberwolfach problem. In this series of talks, I shall examine what has been accomplished so far, take a look at current work, and suggest a possible new avenue of approach. The material to be presented essentially will be self-contained.
This is joint work with Geoffrey Lee.
The set of permutations generated by passing an ordered sequence through a stack of depth 2 followed by an infinite stack in series was shown to be finitely based by Elder in 2005. In this new work we obtain an algebraic generating function for this class, by showing it is in bijection with the language of an unambiguous context-free grammar.
This year is the fiftieth anniversary of Ringel's posing of the well-known graph decomposition problem called the Oberwolfach problem. In this series of talks, I shall examine what has been accomplished so far, take a look at current work, and suggest a possible new avenue of approach. The material to be presented essentially will be self-contained.
In these two talks I want to talk, both generally and personally, about the use of tools in the practice of modern research mathematics. To focus my attention I have decided to discuss the way I and my research group members have used tools, primarily computational (visual, numeric and symbolic), during the past five years. When the tools are relatively accessible I shall exhibit details; when they are less accessible I settle for illustrations and discussion of process.
Long before current graphic, visualisation and geometric tools were available, John E. Littlewood, 1885-1977, wrote in his delightful Miscellany:
A heavy warning used to be given [by lecturers] that pictures are not rigorous; this has never had its bluff called and has permanently frightened its victims into playing for safety. Some pictures, of course, are not rigorous, but I should say most are (and I use them whenever possible myself).
Over the past five years, the role of visual computing in my own research has expanded dramatically. In part this was made possible by the increasing speed and storage capabilities - and the growing ease of programming - of modern multi-core computing environments. But, at least as much, it has been driven by paying more active attention to the possibilities for graphing, animating or simulating most mathematical research activities.
The idea of an almost automorphism of a tree will be introduced, as well as what we are calling quasi-regular trees. I will then outline what I have been doing in regard to the almost automorphisms of almost quasi-regular trees with two valencies and the challenges that come with using more valencies.
In these two talks I want to talk, both generally and personally, about the use of tools in the practice of modern research mathematics. To focus my attention I have decided to discuss the way I and my research group members have used tools, primarily computational (visual, numeric and symbolic), during the past five years. When the tools are relatively accessible I shall exhibit details; when they are less accessible I settle for illustrations and discussion of process.
Long before current graphic, visualisation and geometric tools were available, John E. Littlewood, 1885-1977, wrote in his delightful Miscellany:
A heavy warning used to be given [by lecturers] that pictures are not rigorous; this has never had its bluff called and has permanently frightened its victims into playing for safety. Some pictures, of course, are not rigorous, but I should say most are (and I use them whenever possible myself).
Over the past five years, the role of visual computing in my own research has expanded dramatically. In part this was made possible by the increasing speed and storage capabilities - and the growing ease of programming - of modern multi-core computing environments. But, at least as much, it has been driven by paying more active attention to the possibilities for graphing, animating or simulating most mathematical research activities.
The American mathematical research community experienced remarkable changes over the course of the three decades from 1920 to 1950. The first ten years witnessed the "corporatization" and "capitalization" of the American Mathematical Society, as mathematicians like Oswald Veblen and George Birkhoff worked to raise private, governmental, and foundation monies in support of research-level mathematics. The next decade, marked by the stock market crash and Depression, almost paradoxically witnessed the formation and building up of a number of strongly research-oriented departments across the nation at the same time that noted mathematical refugees were fleeing the ever-worsening political situation in Europe. Finally, the 1940s saw the mobilization of American research mathematicians in the war effort and their subsequent efforts to insure that pure mathematical research was supported as the Federal government began to open its coffers in the immediately postwar period. Ultimately, the story to be told here is a success story, but one of success in the face of many obstacles. At numerous points along the way, things could have turned out dramatically differently. This talk will explore those historical contingencies.
About the speaker:
Karen Parshall is Professor of History and Mathematics at the University of Virginia, where she has served on the faculty since 1988. Her research focuses primarily on the history of science and mathematics in America and in the history of 19th- and 20th-century algebra. In addition to exploring technical developments of algebra—the theory of algebras, group theory, algebraic invariant theory—she has worked on more thematic issues such as the development of national mathematical research communities (specifically in the United States and Great Britain) and the internationalization of mathematics in the nineteenth and twentieth centuries. Her most recent book (co-authored with Victor Katz), Taming the Unknown: A History of Algebra from Antiquity to the Early Twentieth Century, will be published by Princeton University Press in June 2014.
This talk is a practice talk for an invited talk I will soon give in Indonesia, in which I was asked to present on Education at a conference on Graph Theory.
In 1929 Alfred North Whitehead wrote: "The university imparts information, but it imparts it imaginatively. At least, this is the function it should perform for society. A university which fails in this respect has no reason for existence. This atmosphere of excitement, arising from imaginative consideration, transforms knowledge. A fact is no longer a bare fact: it is invested with all its possibilities."
In the light and inspiration of Whitehead's quote, I will discuss some aspects of the problem and challenge of mathematical education as we meet it in Universities today, with reference to some of the ways that combinatorics may be an ideal vehicle for sharing authentic mathematical experiences with diverse students.
We begin the talk with the story of Dido and the Brachistochrone problem. We show how these two problems lead to the two most fundamental problems of the calculus of variations. The Brachistochrone problem leads to the basic problem of the calculus of variations, which in turn leads to the Euler-Lagrange equation. We show the link between the Euler-Lagrange equations and the laws of classical mechanics.
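For concreteness (standard background rather than a result of the talk): in the basic problem one minimises $J[y]=\int_a^b L(x,y,y')\,dx$ over curves with fixed endpoints, and a minimiser must satisfy the Euler-Lagrange equation $$\frac{d}{dx}\frac{\partial L}{\partial y'} - \frac{\partial L}{\partial y} = 0;$$ taking $L = \tfrac12 m\dot q^2 - V(q)$ recovers Newton's law $m\ddot q = -V'(q)$, which is the link to classical mechanics mentioned above.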
We also discuss the Legendre conditions and Jacobi conjugate points, which lead to sufficient conditions for weak local minimum points.
Dido's problem leads to the problem of Lagrange, in which Lagrange introduces his multiplier rule. We also speak a bit about the problem of Bolza, and further discuss how the class of extremals can be enlarged, the issue of the existence of solutions in the calculus of variations, Tonelli's direct methods, and some more facts on the quest for multiplier rules.
Using gap functions to devise error bounds for some special classes of monotone variational inequalities is a fruitful venture, since it allows us to obtain error bounds for certain classes of convex optimization problems. It is worth noting that a Hoffman-type approach to error bounds for the solution set of a convex programming problem does not turn out to be fruitful, and thus using the vehicle of variational inequalities seems fundamental in this case. We begin the discussion by introducing several popular gap functions for variational inequalities, such as the Auslender gap function and Fukushima's regularized gap function, and showing how error bounds can be created from them. We then also spend a brief time with gap functions for variational inequalities with set-valued maps, which correspond to non-smooth convex optimization problems. We then shift our focus to creating error bounds using the dual gap function, which is, to the best of our knowledge, possibly the only convex gap function known in the literature. In fact this gap function had never been used for creating error bounds. Error bounds can be used as stopping criteria, and thus the dual gap function can be used both to solve the variational inequality and to develop a stopping criterion. We present several recent results on error bounds using the dual gap function and also provide an application to quasiconvex optimization.
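For orientation, the gap functions named above are usually written as follows (standard definitions, with $C$ the feasible set and $F$ the map defining the variational inequality): the Auslender gap function is $$g(x) = \sup_{y \in C} \langle F(x), x - y\rangle,$$ and Fukushima's regularized gap function is $$g_\alpha(x) = \max_{y \in C}\Big\{\langle F(x), x - y\rangle - \frac{\alpha}{2}\|x-y\|^2\Big\}, \qquad \alpha > 0.$$ Both are non-negative on $C$ and vanish exactly at the solutions of the variational inequality, which is what makes residual-type error bounds possible.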
I will solve a variety of mathematical problems in Maple. These will come from geometry, number theory, analysis and discrete mathematics.
A companion book chapter is http://carma.newcastle.edu.au/jon/hhm.pdf.
The need for well-trained secondary mathematics teachers is well documented. In this talk we will discuss strategies we have developed at JCU to address the quality of graduating mathematics teachers. These strategies are broadly grouped as (i) having students develop a sense of how they learn mathematics and the skills they can work on to improve their learning of mathematics, and (ii) the need for specific mathematics content subjects for pre-service secondary mathematics teachers.
Our aim in this talk is to show that the D-gap function can play a pivotal role in developing inexact descent methods to solve the monotone variational inequality problem when the feasible set of the variational inequality is a closed convex set rather than just the non-negative orthant. We also focus on the issue of regularization of variational inequalities. Friedlander and Tseng showed in 2007 that regularizing the convex objective function with another convex function, chosen correctly in practice, can make the problem simpler to solve, and they provided a criterion for exact regularization of convex optimization problems. In this talk we ask to what extent one can extend the idea of exact regularization to the context of variational inequalities. We study this question and show the central role played by the dual gap function in the analysis.
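As a reminder of the standard construction (not specific to this talk), the D-gap function is the difference of two regularized gap functions, $$\psi_{\alpha\beta}(x) = g_\alpha(x) - g_\beta(x), \qquad 0 < \alpha < \beta,$$ which is non-negative on the whole space and vanishes exactly at solutions of the variational inequality, so minimising it gives an unconstrained reformulation on which descent methods can act.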
In this talk we discuss the importance of M-stationarity conditions for a special class of one-stage stochastic mathematical programming problems with complementarity constraints (SMPCC, for short). The M-stationarity concept is well known for deterministic MPCC problems, and using results for deterministic MPCC problems we can easily derive M-stationarity for SMPCC problems under some well-known constraint qualifications. It is well observed that under the MPCC linear independence constraint qualification we obtain strong stationarity conditions at a local minimum, a stronger notion than M-stationarity. The same result can be derived for SMPCC problems under SMPCC-LICQ. The question that then arises is: what is the importance of studying M-stationarity under the assumption of SMPCC-LICQ? To answer this question we discuss the sample average approximation (SAA) method, a common technique for solving stochastic optimization problems. Here one discretizes the underlying probability space and then, using the strong law of large numbers, approximates the expectation functionals. The main result of this discussion is as follows: if we consider a sequence of M-type Fritz John points of the SAA problems, then any accumulation point of this sequence will be an M-stationary point under SMPCC-LICQ. This kind of result, in general, does not hold for strong stationarity conditions.
It is axiomatic in mathematics research that all steps of an argument or proof are open to scrutiny. However, a proof based even in part on commercial software is hard to assess, because the source code---and sometimes even the algorithm used---may not be made available. There is the further problem that a reader of the proof may not be able to verify the author's work unless the reader has access to the same software.
For this reason open-source software systems have always enjoyed some use by mathematicians, but only recently have systems of sufficient power and depth become available which can compete with---and in some cases even surpass---commercial systems.
Most mathematicians and mathematics educators seem to gravitate to commercial systems partly because such systems are better marketed, but also in the view that they may enjoy some level of support. But this comes at the cost of initial purchase, plus annual licensing fees. The current state of tertiary funding in Australia means that for all but the very top tier of universities, the expense of such systems is harder to justify.
For educators, a problem is making the system available to students: it is known that students get the most use from a system when they have unrestricted access to it: at home as well as at their institution. Again, the use of an open-source system makes it trivial to provide access.
This talk aims to introduce several very powerful and mature systems: the computer algebra systems Sage, Maxima and Axiom; the numerical systems Octave and Scilab; and the assessment system WeBWorK (or as many of those as time permits). We will briefly describe these systems: their history, current status, usage, and comparison with commercial systems. We will also indicate ways in which anybody can be involved in their development. The presenter will describe his own experiences in using these software systems, and his students' attitudes to them.
Depending on audience interests and expertise, the talk might include looking at a couple of applications: geometry and Gr\"obner bases, derivation of Runge-Kutta explicit formulas, cryptography involving elliptic curves and finite fields, or digital image processing.
The talk will not assume any particular mathematics beyond undergraduate material or material with which the audience is comfortable, and will be as polemical as the audience allows.
The additive or linearized polynomials were introduced by Ore in 1933 as an analogy over finite fields to his theory of differential and difference equations over function fields. The additive polynomials over a finite field $F=GF(q)$, where $q=p^e$ for some prime $p$, are those of the form
$f = f_0 x + f_1 x^p + f_2 x^{p^2} + \cdots + f_m x^{p^m}$ in $F[x]$.
They form a non-commutative left-euclidean principal ideal domain under the usual addition and functional composition, and possess a rich structure in both their decomposition structures and root geometries. Additive polynomials have been employed in number theory and algebraic geometry, and applied to constructing error-correcting codes and cryptographic protocols. In this talk we will present fast algorithms for decomposing and factoring additive polynomials, and also for counting the number of decompositions with particular degree sequences.
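As a toy illustration of additivity (my own sketch, unrelated to the decomposition algorithms of the talk), the following Python fragment works in GF(8), represented as 3-bit integers modulo the irreducible polynomial $x^3+x+1$, and checks that $f(a+b)=f(a)+f(b)$ for an additive polynomial with arbitrarily chosen coefficients.

    IRRED = 0b1011          # x^3 + x + 1; elements of GF(8) are the integers 0..7

    def gf_mul(a, b):
        """Carry-less multiplication in GF(2^3), reduced modulo IRRED."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a & 0b1000:  # degree 3 reached: reduce
                a ^= IRRED
        return r

    def additive_poly(coeffs):
        """Return f(x) = sum_i coeffs[i] * x^(2^i) as a callable on GF(8)."""
        def f(x):
            acc, frob = 0, x
            for c in coeffs:
                acc ^= gf_mul(c, frob)      # addition in GF(2^m) is XOR
                frob = gf_mul(frob, frob)   # next Frobenius power of x
            return acc
        return f

    f = additive_poly([3, 1, 5])            # f(x) = 3x + x^2 + 5x^4
    assert all(f(a ^ b) == f(a) ^ f(b) for a in range(8) for b in range(8))
    print("f is additive on GF(8)")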
Algebraically, we show how to reduce the problem of decomposing additive polynomials to decomposing a related associative algebra, the eigenring. We give computationally efficient versions of the Jordan-Holder and Krull-Schmidt theorems in this context to describe all possible factorizations. Geometrically, we show how to compute a representation of the Frobenius operator on the space of roots, and show how its Jordan form can be used to count the number of decompositions. We also describe an inverse theory, from which we can construct and count the number of additive polynomials with specified factorization patterns.
Some of this is joint work with Joachim von zur Gathen (Bonn) and Konstantin Ziegler (Bonn).
I am refereeing a manuscript in which a new construction for producing graphs from a group is given. There are some surprising aspects of this new method and that is what I shall discuss.
The Australian Mathematical Sciences Student Conference is held annually for Australian postgraduate and honours students of any mathematical science. The conference brings students together, gives an opportunity for presentation of work, facilitates dialogue, and encourages collaboration, within a friendly and informal atmosphere.
Visit the conference website for more details.
I will survey my career both mathematically and personally offering advice and opinions, which should probably be taken with so many grains of salt that it makes you nauseous. (Note: Please bring with you a sense of humour and all of your preconceived notions of how your life will turn out. It will be more fun for everyone that way.)
What the three elements of the title have in common is the utility of using graph searching as a model. In this talk I shall discuss the relatively brief history of graph searching, several models currently being employed, several significant results, unsolved conjectures, and the vast expanse of unexplored territory.
I will talk about the geometric properties of conic problems and their interplay with ill-posedness and the performance of numerical methods. This includes some new results on the facial structure of general convex cones, preconditioning of feasibility problems and characterisations of ill-posed systems.
Many biological environments, both intracellular and extracellular, are often crowded by large molecules or inert objects which can impede the motion of cells and molecules. It is therefore essential for us to develop appropriate mathematical tools which can reliably predict and quantify collective motion through crowded environments.
Transport through crowded environments is often classified as anomalous, rather than classical, Fickian diffusion. Over the last 30 years many studies have sought to describe such transport processes using either a continuous time random walk or fractional order differential equation. For both these models the transport is characterized by a parameter $\alpha$, where $\alpha=1$ is associated with Fickian diffusion and $\alpha<1$ is associated with anomalous subdiffusion. In this presentation we will consider the motion of a single agent migrating through a crowded environment that is populated by impenetrable, immobile obstacles and we estimate $\alpha$ using mean squared displacement data. These results will be compared with computer simulations mimicking the transport of a population of such agents through a similar crowded environment and we match averaged agent density profiles to the solution of a related fractional order differential equation to obtain an alternative estimate of $\alpha$. I will examine the relationship between our estimate of $\alpha$ and the properties of the obstacle field for both a single agent and a population of agents; in both cases $\alpha$ decreases as the obstacle density increases, and the rate of decrease is greater for smaller obstacles. These very simple computer simulations suggest that it may be inappropriate to model transport through a crowded environment using widely reported approaches including power laws to describe the mean squared displacement and fractional order differential equations to represent the averaged agent density profiles.
More details can be found in Ellery, Simpson, McCue and Baker (2014) The Journal of Chemical Physics, 140, 054108.
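A schematic version of the single-agent experiment (my own sketch, not the authors' code): random walkers move on a periodic lattice seeded with immobile obstacles, and $\alpha$ is estimated as the slope of the mean squared displacement on a log-log scale, so that $\alpha < 1$ signals subdiffusion.

    import numpy as np

    rng = np.random.default_rng(0)
    L, density, n_walkers, n_steps = 100, 0.3, 500, 400

    obstacles = rng.random((L, L)) < density          # immobile, impenetrable obstacles
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

    free = np.argwhere(~obstacles)                    # start walkers on obstacle-free sites
    pos = free[rng.integers(len(free), size=n_walkers)]
    disp = np.zeros((n_walkers, 2))
    msd = np.zeros(n_steps)

    for t in range(n_steps):
        moves = steps[rng.integers(4, size=n_walkers)]
        new = (pos + moves) % L                       # periodic boundaries
        blocked = obstacles[new[:, 0], new[:, 1]]
        pos = np.where(blocked[:, None], pos, new)    # blocked moves are aborted
        disp = np.where(blocked[:, None], disp, disp + moves)
        msd[t] = np.mean(np.sum(disp ** 2, axis=1))

    t = np.arange(1, n_steps + 1)
    alpha = np.polyfit(np.log(t[10:]), np.log(msd[10:]), 1)[0]  # fit MSD ~ t^alpha
    print(f"estimated alpha = {alpha:.2f}")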
The talk will provide a brief overview of the findings of two completed research projects and one ongoing project related to the knowledge and beliefs of teachers of school mathematics. It will consider some existing frameworks for types of teacher knowledge, and the place of teachers’ beliefs and confidence in relation to these, as well as touching on how a broad construct of teacher knowledge might develop.
We shall finish our look at two-sided group graphs.
The relentless advance of computer technology, a gift of Moore's Law, and the data deluge available via the Internet and other sources, has been a gift to both scientific research and business/industry. Researchers in many fields are hard at work exploiting this data. The discipline of "machine learning," for instance, attempts to automatically classify, interpret and find patterns in big data. It has applications as diverse as supernova astronomy, protein molecule analysis, cybersecurity, medicine and finance. However, with this opportunity comes the danger of "statistical overfitting," namely attempting to find patterns in data beyond prudent limits, thus producing results that are statistically meaningless.
The problem of statistical overfitting has recently been highlighted in mathematical finance. A just-published paper by the present author, Jonathan Borwein, Marcos Lopez de Prado and Jim Zhu, entitled "Pseudo-Mathematics and Financial Charlatanism," calls into question the present practice of using historical stock market data to "backtest" a new proposed investment strategy or exchange-traded fund. We demonstrate that in fact it is very easy to overfit stock market data, given the powerful computer technology available, and, further, that without disclosure of how many variations were tried in the design of a proposed investment strategy, it is impossible for potential investors to know if the strategy has been overfit. Hence, many published backtests are probably invalid, and this may explain why so many proposed investment strategies, which look great on paper, later fall flat when actually deployed.
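A toy illustration of the effect (my own sketch, not the experiments of the paper): generate pure-noise daily returns, backtest many random strategies on them, and report the best in-sample Sharpe ratio. The winner looks attractive even though every strategy is, by construction, worthless out of sample.

    import numpy as np

    rng = np.random.default_rng(0)
    T, n_strategies = 1250, 1000                      # ~5 years of daily data, many candidates

    market = rng.normal(0.0, 0.01, size=T)            # i.i.d. zero-mean "market" returns

    best_sharpe = -np.inf
    for _ in range(n_strategies):
        positions = rng.choice([-1.0, 1.0], size=T)   # each candidate is a random +/-1 position
        returns = positions * market
        sharpe = np.sqrt(252) * returns.mean() / returns.std()
        best_sharpe = max(best_sharpe, sharpe)

    print(f"best in-sample annualised Sharpe over {n_strategies} noise strategies: {best_sharpe:.2f}")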
In general, we argue that not only do those who directly deal with "big data" need to be better aware of the methodological and statistical pitfalls of analyzing this data, but those who observe these problems of this sort arising in their profession need to be more vocal about them. Otherwise, to quote our "Pseudo-Mathematics" paper, "Our silence is consent, making us accomplices in these abuses."
(see PDF)
Lagrange multiplier method is fundamental in dealing with constrained optimization problems and is also related to many other important results.
In these two talks we first survey several different ideas in proving the Lagrange multiplier rule and then concentrate on the variational approach.
We will first discuss the idea, a variational proof the Lagrange multiplier rule in the convex case and then consider the general case and relationship with other results.
These talks are continuation of the e-mail discussions with Professor Jon Borwein and are very informal.
Reproducibility is emerging as a major issue for highly parallel computing, in much the same way (and for many of the same reasons) that it is emerging as an issue in other fields of science, technology and medicine, namely the growing numbers of cases where other researchers cannot reproduce published results. This talk will summarize a number of these issues, including the need to carefully document computational experiments, the growing concern over numerical reproducibility and, once again, the need for responsible reporting of performance. Have we learned the lessons of history?
My talk will be on the projection/reflection methods and the application of tools from convex and variational analysis to optimisation problems, and I will talk about my thesis problem which focuses on the following:
We consider convexity conditions ensuring the monotonicity of right and left Riemann sums of a function $f:[0,1]\rightarrow \mathbb{R}$; applying our results in particular to functions such as
$f(x) =1/\left(1+x^2\right)$.

Lagrange multiplier method is fundamental in dealing with constrained optimization problems and is also related to many other important results.
In these two talks we first survey several different ideas in proving the Lagrange multiplier rule and then concentrate on the variational approach.
We will first discuss the idea, a variational proof the Lagrange multiplier rule in the convex case and then consider the general case and relationship with other results.
These talks are continuation of the e-mail discussions with Professor Jon Borwein and are very informal.
Usually, when we want to study permutation groups, we look first at the primitive permutation groups (transitive groups in which point stabilizers are maximal); in the finite case these groups are the basic building blocks from which all finite permutation groups are built. Thanks to the seminal O'Nan—Scott Theorem and the Classification of the Finite Simple Groups, the structure of finite primitive permutation groups is broadly known.
In this talk I'll describe a new theorem of mine which extends the O'Nan—Scott Theorem to a classification of all primitive permutation groups with finite point stabilizers. This theorem describes the structure of these groups in terms of finitely generated simple groups.
The eighth edition of the conference series GAGTA (Geometric and Asymptotic Group Theory with Applications) will be held in Newcastle, Australia July 21-25 (Mon-Fri) 2014.
GAGTA conferences are devoted to the study of a variety of areas in geometric and combinatorial group theory, including asymptotic and probabilistic methods, as well as algorithmic and computational topics involving groups. In particular, areas of interest include group actions, isoperimetric functions, growth, asymptotic invariants, random walks, algebraic geometry over groups, algorithmic problems and their complexity, generic properties and generic complexity, and applications to non-commutative cryptography.
Visit the conference web site for more information.
A vast number of natural processes can be modelled by partial differential equations involving diffusion operators. The Navier-Stokes equations of fluid dynamics are one of the most popular such models, but many other equations describing flows involve diffusion processes. These equations are often non-linear and coupled, and theoretical analysis can only provide limited information on the qualitative behaviours of their solutions. Numerical analysis is then used to obtain a prediction of the fluid's behaviour.
In many circumstances, the numerical methods used to approximate the models must satisfy engineering or computational constraints. For example, in underground flows in porous media (involved in oil recovery, carbon storage or hydrogeology), the diffusion properties of the medium vary greatly between geological layers, and can be strongly skewed in one direction. Moreover, the available meshes used to discretise the equations may be quite irregular. The sheer size of the domain of study (a few kilometres wide) also calls for methods that can be easily parallelised and give good and stable results on relatively large grids. These constraints make the construction and study of numerical methods for diffusion models very challenging.
In the first part of this talk, I will present some numerical schemes, developed in the last 10 years and designed to discretise diffusion equations as encountered in reservoir engineering, with all the associated constraints. In the second part, I will focus on mathematical tools and techniques constructed to analyse the convergence of numerical schemes under realistic hypotheses (i.e. without assuming non-physical smoothness on the data or the solutions). These techniques are based on the adaptation to the discrete setting of functional analysis results used to study the continuous equations.
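As a bare-bones example of what "discretising a diffusion equation" means (a one-dimensional implicit finite-difference scheme on a uniform grid, far simpler than the anisotropic, general-mesh schemes discussed in the talk):

    import numpy as np

    # u_t = D u_xx on (0,1) with homogeneous Dirichlet data, implicit Euler in time.
    D, nx, nt, T = 1.0, 50, 200, 0.1
    dx, dt = 1.0 / (nx + 1), T / nt
    x = np.linspace(dx, 1 - dx, nx)                   # interior grid points
    u = np.sin(np.pi * x)                             # initial condition

    r = D * dt / dx ** 2
    A = (1 + 2 * r) * np.eye(nx) - r * (np.eye(nx, k=1) + np.eye(nx, k=-1))

    for _ in range(nt):
        u = np.linalg.solve(A, u)                     # one implicit Euler step

    exact = np.exp(-np.pi ** 2 * D * T) * np.sin(np.pi * x)
    print(np.max(np.abs(u - exact)))                  # discretisation error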
Colin Reid will present some thoughts on limits of contraction groups.
I shall be describing a largely unexplored concept in graph theory which is, I believe, an ideal thesis topic. I shall be presenting this at the CIMPA workshop in Laos in December.
Mathematics can often seem almost too good to be true. This sense that mathematics is marvellous enlivens learning and stimulates research, but we tend to let remarkable things pass without remark after we become familiar with them. The miracles of Pythagorean triples and eigenvalues will be highlights of this talk.
The talk will include some ideas of what could be blending into our teaching program.
We give some background to the metric basis problem (or resolving set) of a graph. We discuss various resolving sets with different conditions forced on them. We mainly stress the ideas of strong metric basis and partition dimension of graphs. We give the necessary literature background on these concepts and some preliminary results. We present our new results obtained so far as part of the research during my candidature. We also list the research problems I propose to study during the remainder of my PhD candidature and we present a tentative timeline of my research activities.
This week I shall start a series of talks on basic pursuit-evasion in graphs (frequently called cops and robber in the literature). We shall do some topological graph theory leading to an intriguing conjecture, and we'll look at a characterization problem.
The Diophantine Problem in group theory can be stated as: is it algorithmically decidable whether an equation whose coefficients are elements of a given group has at least one solution in that group?
The talk will be a survey on this topic, with emphasis on what is known about solving equations in free groups. I will also present some of the algebraic geometry over groups developed in the last 20 years, and the connections to logic and geometry. I will conclude with results concerning the asymptotic behavior of satisfiable homogeneous equations in surface groups.
Jon Borwein will discuss CARMA's new "Risk and finance study group". Please come and learn about the opportunities. See also http://www.financial-math.org/ and http://www.financial-math.org/blog/.
This week I shall continue the discussion of searching graphs.
We present a PSPACE-algorithm to compute a finite graph of exponential size that describes the set of all solutions of equations in free groups with rational constraints. This result became possible due to the recently invented recompression technique of Artur Jez. We show that it is decidable in PSPACE whether the set of all solutions is finite. If the set of all solutions is finite, then the length of a longest solution is at most doubly exponential.
This talk is based on a joint paper with Artur Jez and Wojciech Plandowski (arXiv:1405.5133 and LNCS 2014, Proceedings CSR 2014, Moscow, June 7 -- 11, 2014).
Ben will attempt to articulate what he has been meaning to work on. That is, choosing representatives with smallest 1-norm in an effort to find a nice bound on the number of vertices on level 1 of the corresponding rooted almost quasi-regular tree with 1 defect, and other ideas on choosing good representatives.
Brian Alspach will continue discussing searching graphs embedded on the torus.
The restricted product over $X$ of copies of the $p$-adic numbers $\mathbb{Q}_p$, denoted $\mathbb{Q}_p(X)$, is self-dual and is the natural $p$-adic analogue of Hilbert space. The additive group of this space is locally compact and the continuous endomorphisms of the group are precisely the continuous linear operators on $\mathbb{Q}_p(X)$.
Attempts to develop a spectral theory for continuous linear operators on $\mathbb{Q}_p(X)$ will be described at an elementary level. The Berkovich spectral theory over non-Archimedean fields will be summarised and the spectrum of the linear operator $T$ compared with the scale of $T$ as an endomorphism of $(\mathbb{Q}_p(X),+)$.
The original motivation for this work, which is joint with Andreas Thom (Leipzig), will also be briefly discussed. A certain result that holds for representations of any group on a Hilbert space, proved by operator theoretic methods, can only be proved for representations of sofic groups on $\mathbb{Q}_p(X)$ and it is thought that the difficulty might lie with the lack of understanding of linear operators on $\mathbb{Q}_p(X)$ rather than with non-sofic groups.
This forum is a follow-on from the seminar that Professor Willis gave three weeks prior, on maths that seems too good to be true, and his ideas for incorporating the surprising and enlivening into what and how we teach: he gave as exemplars the miracles of Pythagorean triples and eigenvalues. A question raised in the discussion at that seminar was whether, and how, we might use assessment to encourage the kinds of learning we would like. This forum will be an opportunity to further that conversation.
Jeff, Andrew and Massoud have each kindly agreed to give us 5 minute presentations relating to the latter year maths courses that they have recently been teaching, to get our forum started. Jeff may speak on his developments in his new course on Fourier methods, Andrew will talk about some of the innovations that were introduced into Topology in the last few offerings which he has been using and further developing, and Massoud has a range of OR courses he might speak about.
Everyone is encouraged to share examples of their own practice or ideas that they have that may be of interest to others.
A locating-total dominating set (LTDS) in a connected graph $G$ is a total dominating set $S$ of $G$ such that for every two vertices $u$ and $v$ in $V(G)-S$, $N(u) \cap S \neq N(v) \cap S$. Determining the minimum cardinality of a locating-total dominating set is called the locating-total domination problem, and this minimum is denoted $\gamma_t^l(G)$. We have improved the lower bound obtained by M. A. Henning and N. J. Rad [1]. We have also proved that the bound obtained is sharp for some special families of regular graphs.
[1] M. A. Henning and N. J. Rad, Locating-total dominations in graphs, Discrete Applied Mathematics, 160(2012), 1986-1993.
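A brute-force check of the definition on a toy graph (my own illustration, unrelated to the bounds of the talk), with the graph given as an adjacency dictionary:

    from itertools import combinations

    G = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}   # the 6-cycle C6

    def is_ltds(G, S):
        S = set(S)
        # total domination: every vertex, including those in S, has a neighbour in S
        if any(not (G[v] & S) for v in G):
            return False
        # location: vertices outside S have pairwise distinct neighbourhoods in S
        codes = [frozenset(G[v] & S) for v in G if v not in S]
        return len(codes) == len(set(codes))

    def min_ltds(G):
        """Smallest cardinality of a locating-total dominating set (exhaustive search)."""
        vertices = list(G)
        for k in range(1, len(vertices) + 1):
            if any(is_ltds(G, S) for S in combinations(vertices, k)):
                return k

    print(min_ltds(G))   # prints 4 for C6; exhaustive search is feasible only for tiny graphs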
8:30 am | Registration, coffee and light breakfast
9:00 am | Director's Welcome
9:30 am | Session: "Research at CARMA"
10:30 am | Morning tea
11:00 am | Session: "Academic Liaising"
11:30 am | Session: "Education/Outreach Activities"
12:30 pm | Lunch
2:00 pm | Session: "Future of Research at the University"
2:30 pm | Session: "Future Planning for CARMA"
3:30 pm | Afternoon tea
4:00 pm | Session: Talks by members (to 5:20 pm)
6:00 pm | Dinner
In this talk we consider economic Model Predictive Control (MPC) schemes. "Economic" means that the MPC stage cost models economic considerations (like maximal yield, minimal energy consumption...) rather than merely penalizing the distance to a pre-computed steady state or reference trajectory. In order to keep implementation and design simple, we consider schemes without terminal constraints and costs.
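A minimal sketch of such a scheme (toy dynamics and stage cost chosen purely for illustration, and a generic optimiser in place of a dedicated MPC solver): at each time we minimise the sum of economic stage costs over a finite horizon, with no terminal cost or constraint, and apply only the first control.

    import numpy as np
    from scipy.optimize import minimize

    f = lambda x, u: 0.9 * x + u            # toy dynamics x(k+1) = f(x(k), u(k))
    ell = lambda x, u: u ** 2 - x           # "economic" stage cost (not a tracking cost)
    N = 30                                  # prediction horizon, no terminal ingredients

    def mpc_step(x0):
        """Solve the finite-horizon problem from x0 and return the first control move."""
        def cost(u_seq):
            x, J = x0, 0.0
            for u in u_seq:
                J += ell(x, u)
                x = f(x, u)
            return J
        return minimize(cost, np.zeros(N)).x[0]

    # Closed-loop (receding-horizon) simulation. For this toy model the best steady
    # state is (x*, u*) = (50, 5); with a finite horizon and no terminal ingredients
    # the closed loop settles near, but slightly below, it (practical stability).
    x = 0.0
    for k in range(40):
        u = mpc_step(x)
        x = f(x, u)
    print(x)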
In the first (longer) part of the talk, we summarize recent results on the performance and stability properties of such schemes for nonlinear discrete time systems. Particularly, we present conditions under which one can guarantee practical asymptotic stability of the optimal steady state as well as approximately optimal averaged and transient performance. Here, dissipativity of the underlying optimal control problems and the turnpike property are shown to play an important role (this part is based on joint work with Tobias Damm, Marleen Stieler and Karl Worthmann).
In the second (shorter) part of the talk we present an application of an economic MPC scheme to a Smart Grid control problem (based on joint work with Philipp Braun, Christopher Kellett, Steven Weller and Karl Worthmann). While economic MPC shows good results for this control problem in numerical simulations, several aspects of this application are not covered by the available theory. This is explained in the last part of the talk, along with some suggestions on how to overcome this gap.
Classical umbral calculus was introduced by Blissard in the 1860's and later studied by E. T. Bell and Rota. It is a symbolic computation method that is particularly efficient for proving identities involving elementary special functions such as Bernoulli or Hermite polynomials. I will show the link between this technique and moment representation, and provide examples of its application.
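A standard first example of the method (included for orientation, not drawn from the talk): writing $B^n$ as an umbra for the Bernoulli number $B_n$, the defining recurrence becomes the umbral identity $$(B+1)^n = B^n \quad (n \ge 2), \qquad \text{equivalently} \qquad \sum_{k=0}^{n-1}\binom{n}{k}B_k = 0,$$ which together with $B_0 = 1$ generates all Bernoulli numbers once the powers are "lowered" back to subscripts; the link to moment representations comes from viewing the evaluation map $B^n \mapsto B_n$ as a moment functional.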
This is the first in a series of lectures on this fascinating group.
If you’re enrolled in a BMath or Combined Maths degree or have Maths or Stats as a co-major, you’re invited to the B Math Party.
Come along for free food and soft drinks, meet fellow students and talk to staff about courses. Discover opportunities for summer research, Honours, Higher Degrees and scholarships.
The topological and measure structures carried by locally compact groups make them precisely the class of groups to which the methods of harmonic analysis extend. These methods involve study of spaces of real- or complex-valued functions on the group, and general theorems from topology guarantee that these spaces are sufficiently large. When analysing particular groups, however, particular functions deriving from the structure of the group are at hand. The identity functions in the cases of $(\mathbb{R},+)$ and $(\mathbb{Z},+)$ are the most obvious examples, and coordinate functions on matrix groups and growth functions on finitely generated discrete groups are only slightly less obvious.
In the case of totally disconnected groups, compact open subgroups are essential structural features that give rise to positive integer-valued functions on the group. The set of values of $p$ for which the reciprocals of these functions belong to $L^p$ is related to the structure of the group and, when they do, the $L^p$-norm is a type of $\zeta$-function of $p$. This is joint work with Thomas Weigel of Milan.
This Thursday sees a return to graph searching in the discrete mathematics instructional seminar. I’ll be looking at characterization results.
More than 120 years after their introduction, Lyapunov's so-called First and Second Methods remain the most widely used tools for stability analysis of nonlinear systems. Loosely speaking, the Second Method states that if one can find an appropriate Lyapunov function then the system has some stability property. A particular strength of this approach is that one need not know solutions of the system in order to make definitive statements about stability properties. The main drawback of the Second Method is the need to find a Lyapunov function, which is frequently a difficult task.
Converse Lyapunov Theorems answer the question: given a particular stability property, can one always (in principle) find an appropriate Lyapunov function? In the first instalment of this two-part talk, we will survey the history of the field and describe several such Converse Lyapunov Theorems for both continuous- and discrete-time systems. In the second instalment we will discuss constructive techniques for numerically computing Lyapunov functions.
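For concreteness, the textbook continuous-time form of the Second Method (stated here only as background, not as part of the talk) reads: if $f(0)=0$ and there is a continuously differentiable function $V$ with $V(0)=0$ and $V(x)>0$ for $x \neq 0$, then
\[
\dot V(x) := \nabla V(x)\cdot f(x) \le 0 \ \text{near } 0 \;\Longrightarrow\; x=0 \text{ is stable},
\qquad
\dot V(x) < 0 \ \text{for } x \neq 0 \ \text{near } 0 \;\Longrightarrow\; x=0 \text{ is asymptotically stable}.
\]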
In 1976, Ribe showed that if two Banach spaces are uniformly homeomorphic, then their finite-dimensional subspaces are similar in some sense. This suggests that properties of Banach spaces which depend only on finitely many vectors should have a purely metric characterization. We will briefly discuss the history of the Ribe program, as well as some recent developments.
In particular:
It is known that the function $s$ defined on an ordering of the $4^m$ monomial basis matrices of the real representation of the Clifford algebra $Cl(m,m)$, where $s(A) = 0$ if $A$ is symmetric and $s(A) = 1$ if $A$ is skew, is a bent function. It is perhaps less well known that the function $t$, where $t(A) = 0$ if $A$ is diagonal or skew and $t(A) = 1$ otherwise, is also a bent function, with the same parameters as $s$. The talk will describe these functions and their relation to Hadamard difference sets and strongly regular graphs.
The talk was originally presented at ADTHM 2014 in Lethbridge.
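For readers less familiar with bentness, here is a small Python check via the Walsh–Hadamard transform; the function tested is the classical inner-product bent function on $\mathbb{F}_2^4$, chosen purely for illustration rather than the Clifford-algebra functions $s$ and $t$ above. A Boolean function on an even number $n$ of variables is bent exactly when all of its Walsh coefficients have absolute value $2^{n/2}$.

```python
# Bentness check via the Walsh-Hadamard transform (illustrative example only).
from itertools import product

def walsh_coefficient(f, a, n):
    # W_f(a) = sum over x in {0,1}^n of (-1)^(f(x) + a.x mod 2)
    return sum((-1) ** (f(x) ^ (sum(ai & xi for ai, xi in zip(a, x)) & 1))
               for x in product((0, 1), repeat=n))

def is_bent(f, n):
    target = 2 ** (n // 2)
    return all(abs(walsh_coefficient(f, a, n)) == target
               for a in product((0, 1), repeat=n))

# f(x1,x2,x3,x4) = x1*x2 + x3*x4 over GF(2): the textbook bent function.
f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])
print(is_bent(f, 4))   # True
```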
I will survey some recent and not-so-recent results surrounding the areas of Diophantine approximation and Mahler's method related to variations of the Chomsky-Schützenberger hierarchy.
Third lecture: metric properties.
More than 120 years after their introduction, Lyapunov's so-called First and Second Methods remain the most widely used tools for stability analysis of nonlinear systems. Loosely speaking, the Second Method states that if one can find an appropriate Lyapunov function then the system has some stability property. A particular strength of this approach is that one need not know solutions of the system in order to make definitive statements about stability properties. The main drawback of the Second Method is the need to find a Lyapunov function, which is frequently a difficult task.
Converse Lyapunov Theorems answer the question: given a particular stability property, can one always (in principle) find an appropriate Lyapunov function? In the first instalment of this two-part talk, we will survey the history of the field and describe several such Converse Lyapunov Theorems for both continuous- and discrete-time systems. In the second instalment we will discuss constructive techniques for numerically computing Lyapunov functions.
This week I shall finish my discussion of graph searching by looking at the recent paper by Clarke and MacGillivray that characterizes graphs that are k-searchable.
Optimization problems involving polynomial functions are of great importance in applied mathematics and engineering, and they are intrinsically hard problems. They arise in important engineering applications such as the sensor network localization problem, and provide a rich and fruitful interaction between algebraic-geometric concepts and modern convex programming (semi-definite programming). In this talk, we will discuss some recent progress in polynomial (semi-algebraic) optimization, with a focus on the intrinsic link between the polynomial structure and the hidden convexity structure. The talk will be divided into two parts. In the first part, we will describe the key results in this new area, highlighting the geometric and conceptual aspects as well as recent work on global optimality theory, algorithms and applications. In the second part, we will explain how the semi-algebraic structure helps us to analyze some important and classical algorithms in optimization, such as the alternating projection algorithm, the proximal point algorithm and the Douglas–Rachford algorithm (if time permits).
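As a small illustration of the first of those algorithms, here is a hedged Python sketch of alternating projections between two convex semi-algebraic sets in the plane (a line and the closed unit disc). The sets are invented for the example and are not taken from the talk.

```python
# Alternating projections between the line y = 0.5 and the closed unit disc.
import numpy as np

def P_line(z):                      # projection onto the line y = 0.5
    return np.array([z[0], 0.5])

def P_disc(z):                      # projection onto the closed unit disc
    n = np.linalg.norm(z)
    return z if n <= 1.0 else z / n

z = np.array([3.0, -2.0])
for _ in range(100):                # z <- P_disc(P_line(z))
    z = P_disc(P_line(z))
print(z)                            # a point in the intersection of the two sets
```

How fast such iterates converge is exactly the kind of question the semi-algebraic viewpoint helps to answer.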
One of the key components of the earth’s climate is the formation and melting of sea ice. Currently, we struggle to model this process correctly. One possible explanation for this shortcoming is that ocean waves play a key role and that their effect needs to be included in climate models. I will describe a series of recent experiments which seem to validate this hypothesis and discuss attempts by myself and others to model wave-ice interaction.
We introduce a subfamily of enlargements of a maximally monotone operator $T$. Our definition is inspired by a 1988 publication of Fitzpatrick. These enlargements are elements of the family of enlargements $\mathbb{E}(T)$ introduced by Svaiter in 2000. These new enlargements share with the $\epsilon$-subdifferential a special additivity property, and hence they can be seen as structurally closer to the $\epsilon$-subdifferential. For the case $T=\nabla f$, we prove that some members of the subfamily are smaller than the $\epsilon$-subdifferential enlargement. In this case, we construct a specific enlargement which coincides with the $\epsilon$-subdifferential.
Joint work with Juan Enrique Martínez Legaz, Mahboubeh Rezaei, and Michel Théra.
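For reference (a standard definition, not a result of this work), the $\epsilon$-subdifferential with which these enlargements are compared is
\[
\partial_\epsilon f(x) \;=\; \bigl\{\, v : f(y) \ge f(x) + \langle v,\, y-x\rangle - \epsilon \ \text{ for all } y \,\bigr\}, \qquad \epsilon \ge 0 .
\]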
We discuss the genesis of symbolic computation, its deployment into computer algebra systems, and the applications of these systems in the modern era.
We will pay special attention to polynomial system solvers and highlight the problems that arise when considering non-linear problems. For instance, forgetting about actually solving, how does one even represent infinite solution sets?
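One standard answer to that representation question is to describe an infinite solution set implicitly by a Gröbner basis. The toy system below (Python with SymPy) is an illustrative assumption, not an example from the talk: its solution set is a curve, so it cannot be listed point by point, but a lexicographic Gröbner basis triangularises it.

```python
# Representing an infinite solution set implicitly via a Groebner basis.
from sympy import symbols, groebner

x, y, z = symbols('x y z')
system = [x**2 + y**2 + z**2 - 1, x - y]   # solutions form a curve (a circle in 3-space)
G = groebner(system, x, y, z, order='lex')
print(G)   # the lex basis eliminates variables, giving a triangular description
```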
The completion with respect to the degree valuation of the field of rational functions over a finite field is often a fruitful analogue to consider when one would like to test ideas, methods and conjectures in Diophantine approximation for the real numbers. In many respects, this setting behaves very similarly to the real numbers; in particular, the metric theory of Diophantine approximation is well-developed here, and in some respects more is known to be true in this setting than over the real numbers. However, natural analogues of other classical theorems in Diophantine approximation fail spectacularly in positive characteristic. In this talk, I will introduce the topic and give old and new results underpinning the similarities and differences of the theories of Diophantine approximation in positive characteristic and in characteristic zero.
Self-avoiding walks are a widely studied model of polymers, which are defined as walks on a lattice where each successive step visits a neighbouring site, provided the site has not already been visited. Despite the apparent simplicity of the model, it has been of much interest to statistical mechanicians and probabilists for over 60 years, and many important questions about it remain open.
One of the most powerful methods to study self-avoiding walks is Monte Carlo simulation. I'll give an overview of the historical developments in this field, and will explain what ingredients are needed for a good Monte Carlo algorithm. I'll then describe how recent progress has allowed for the efficient simulation of truly long walks with many millions of steps. Finally, I'll discuss whether lessons we've learned from simulating self-avoiding walks may be applicable to a wide range of Markov chain Monte Carlo simulations.
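To fix ideas, here is a deliberately naive Python sampler: generate nearest-neighbour random walks on the square lattice and reject any that self-intersect. It samples $n$-step self-avoiding walks uniformly, but the acceptance rate decays exponentially in $n$, which is exactly why the more sophisticated algorithms discussed in the talk are needed for long walks.

```python
# Brute-force rejection sampler for self-avoiding walks on the square lattice.
# Exact (uniform over n-step SAWs) but exponentially slow in n.
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def sample_saw(n):
    while True:
        walk, visited = [(0, 0)], {(0, 0)}
        for _ in range(n):
            x, y = walk[-1]
            dx, dy = random.choice(STEPS)
            nxt = (x + dx, y + dy)
            if nxt in visited:          # self-intersection: reject the whole walk
                break
            walk.append(nxt)
            visited.add(nxt)
        else:
            return walk                 # completed all n steps without intersecting

walk = sample_saw(20)
print(len(walk) - 1, "steps, endpoint", walk[-1])
```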
We first introduce the notion of pattern sequences, defined by counting the (possibly overlapping) occurrences of a given word in the $\langle q,r\rangle$-numeration system. After surveying several properties of pattern sequences, we will give necessary and sufficient criteria for the algebraic independence of their generating functions. As applications, we deduce linear relations between pattern sequences.
The proofs of the theorem and the corollaries are based on Mahler's method.
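As a concrete special case (plain base-$q$ expansions rather than the general $\langle q,r\rangle$-numeration systems of the talk), the pattern sequence $e_w(n)$ counts the possibly overlapping occurrences of the word $w$ in the digit string of $n$; for $q=2$ and $w=11$, $(-1)^{e_{11}(n)}$ is the Rudin–Shapiro sequence.

```python
# e_w(n): number of (possibly overlapping) occurrences of the word w
# in the base-q digits of n.  Illustrative special case only.
def pattern_count(n, w="11", q=2):
    digits = ""
    while n:
        digits = str(n % q) + digits
        n //= q
    digits = digits or "0"
    return sum(digits[i:i + len(w)] == w for i in range(len(digits) - len(w) + 1))

print([pattern_count(n) for n in range(16)])
# [0, 0, 0, 1, 0, 0, 1, 2, 0, 0, 0, 1, 1, 1, 2, 3]
```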
The mixed Littlewood conjecture, proposed by de Mathan and Teulié in 2004, states that for every real number $x$ one has $\liminf_{q\to\infty} q \cdot |q|_D \cdot \|qx\| = 0$, where $|q|_D$ is a so-called pseudo-norm which generalises the standard $p$-adic norm. In the talk we'll consider the set $\mathrm{mad}$ of potential counterexamples to this conjecture. Thanks to the results of Einsiedler and Kleinbock we already know that the Hausdorff dimension of $\mathrm{mad}$ is zero, so this set is very tiny. During the talk we'll see that the continued fraction expansion of every element of $\mathrm{mad}$ must satisfy some quite restrictive conditions. As one of them we'll see that for these expansions, considered as infinite words, the complexity function can neither grow too fast nor too slow.
Tensor trains are a new class of structured representations of high-dimensional functions which are thought to have some potential for dealing with high-dimensional problems. While connected with algebraic geometry, the main concepts used are rank-$k$ matrix factorisations. In this talk I will review some basic properties of tensor trains. In particular I will consider algorithms for the solution of linear systems $Ax=0$. This talk is related to research in progress with Jochen Garcke (Uni Bonn and Fraunhofer Institute) on the solution of the chemical master equation. This talk assumes a basic background in matrix algebra. No background in algebraic geometry is required.
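To make the "rank-$k$ matrix factorisation" picture concrete, here is a minimal sketch of the TT-SVD construction: successive truncated SVDs of matrix unfoldings turn a full tensor into a train of three-way cores. This is for illustration only; it forms the full tensor explicitly, which practical solvers for problems such as the chemical master equation avoid.

```python
# TT-SVD sketch: decompose a full tensor into tensor-train cores via successive SVDs.
import numpy as np

def tt_svd(tensor, tol=1e-10):
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(1, -1)
    for n in dims[:-1]:
        mat = mat.reshape(rank * n, -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        new_rank = max(1, int(np.sum(s > tol)))          # truncate small singular values
        cores.append(U[:, :new_rank].reshape(rank, n, new_rank))
        mat = np.diag(s[:new_rank]) @ Vt[:new_rank]
        rank = new_rank
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

# Check: contract the cores back together and compare with the original tensor.
T = np.random.rand(3, 4, 2, 5)
cores = tt_svd(T)
approx = cores[0]
for core in cores[1:]:
    approx = np.tensordot(approx, core, axes=(-1, 0))
print(np.allclose(approx[0, ..., 0], T))   # True (no truncation occurs at this tolerance)
```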
Supervisor: Thomas Kalinowski
Supervisor: Thomas Kalinowski
Supervisor: Brailey Sims
Supervisor: Brian Alspach
Multi-objective optimisation is one of the earliest fields of study in operations research. In fact, Francis Edgeworth (1845--1926) and Vilfredo Pareto (1848--1923) laid the foundations of this field over one hundred years ago. Many real-world problems involve multiple objectives. Due to conflicts between objectives, finding a feasible solution that simultaneously optimises all objectives is usually impossible. Consequently, in practice, decision makers want to understand the trade-off between objectives before choosing a suitable solution. Thus, generating many or all efficient solutions, i.e., solutions in which it is impossible to improve the value of one objective without a deterioration in the value of at least one other objective, is the primary goal in multi-objective optimisation. In this talk, I will focus on Multi-objective Integer Programs (MOIPs) and briefly explain some new efficient algorithms that I have developed since starting my PhD to solve MOIPs. I also explain some links between the ideas of multi-objective integer programming and other fields of study such as game theory.
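To illustrate what "efficient" means, the toy Python snippet below enumerates the nondominated points of a tiny two-objective integer program; the feasible set and the two objectives (both maximised) are invented for the example.

```python
# Enumerate efficient (Pareto-optimal) solutions of a toy bi-objective integer program.
from itertools import product

def objectives(x, y):
    return (3 * x + y, 2 * y - x)            # two objectives, both to be maximised

feasible = [(x, y) for x, y in product(range(6), repeat=2) if x + y <= 5]
values = {p: objectives(*p) for p in feasible}

def dominates(a, b):                          # a at least as good everywhere, better somewhere
    return all(ai >= bi for ai, bi in zip(a, b)) and a != b

efficient = [p for p in feasible
             if not any(dominates(values[q], values[p]) for q in feasible)]
print(efficient)                              # here: the six points with x + y = 5
```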
The Mathematical Sciences Institute will host a three day workshop on more effective use of visualization in mathematics, physics, and statistics, from the perspectives of education, research and outreach. This is the second EViMS meeting, following the highly successful one held in Newcastle in November 2012. Our aim for the workshop is to help mathematical scientists understand the opportunities, risks and benefits of visualization, in research and education, in a world where visual content and new methods are becoming ubiquitous.
Visit the conference website for more information.
(Groups & Dynamics Special Session)
(Maths Education Special Session)
(Operator Algebra/Functional Analysis Special Session)
(Computational Mathematics Special Session)
In this seminar I will talk about decomposing sequences into maximal palindromic factors and the applications of this decomposition to the hairpin analysis of pathogens such as HIV or TB.
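One possible reading of the decomposition, sketched in Python below: greedily split the sequence into factors, each being the longest palindromic prefix of what remains. This is an assumption made for illustration; the precise notion of "maximal palindrome factors" used in the talk (for instance, Watson–Crick/reverse-complement palindromes, which are what actually form hairpins) may differ.

```python
# Greedy decomposition into palindromic factors: repeatedly take the longest
# palindromic prefix of the remaining suffix (a single symbol always qualifies).
def is_palindrome(s):
    return s == s[::-1]

def palindrome_factors(seq):
    factors, i = [], 0
    while i < len(seq):
        for j in range(len(seq), i, -1):
            if is_palindrome(seq[i:j]):
                factors.append(seq[i:j])
                i = j
                break
    return factors

print(palindrome_factors("abacabad"))   # ['abacaba', 'd']
```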
We apply the piecewise-constant discontinuous Galerkin method to discretize a fractional diffusion equation with respect to time. Using Laplace transform techniques, we show that the method is first-order accurate at the $n$th time level $t_n$, but the error bound includes a factor $t_n^{-1}$ if we assume no smoothness of the initial data. We also show that for smoother initial data the growth in the error bound for decreasing time is milder, and in some cases absent altogether. Our error bounds generalize known results for the classical heat equation and are illustrated using a model 1D problem.
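Schematically, and only as a restatement of the nonsmooth-data bound described above (with $k$ the maximum time step, $U^n$ the discrete solution at $t_n$ and $C$ independent of $n$ and $k$), the estimate has the shape
\[
\| U^n - u(t_n) \| \;\le\; C\, k\, t_n^{-1}\, \| u_0 \|,
\]
with the factor $t_n^{-1}$ weakening or disappearing as the initial data become smoother.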