I will explain what groups are and give some examples and applications.
This talk is devoted to three basic forms of the inverse function theorem. The classical inverse function theorem identifies a smooth single-valued localization of the inverse under the condition that the Jacobian is nonsingular.
A Diophantine m-tuple is a set of m positive integers {a_1, . . . , a_m} such that the product of any two of them plus 1 is a square. For example, {1, 3, 8, 120} is a Diophantine quadruple found by Fermat. It is known that there are infinitely many such examples with m = 4 and none with m = 6. No example is known with m = 5, but if any exist, then there are only finitely many. In my talk, I will survey what is known about this problem, as well as its variations, where one replaces the ring of integers by the ring of integers in some finite extension of Q, or by the field of rational numbers, or looks at a variant of this problem in the ring of polynomials with coefficients in a field of characteristic zero, or replaces the squares by perfect powers of a larger exponent, or by members of some other interesting sequence such as the Fibonacci numbers, and so on.
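As a quick check of the defining condition (a small Python sketch of my own, not part of the talk), one can verify Fermat's quadruple directly:

from itertools import combinations
from math import isqrt

def is_diophantine_tuple(nums):
    # Check that a*b + 1 is a perfect square for every pair a, b in the tuple.
    for a, b in combinations(nums, 2):
        s = a * b + 1
        r = isqrt(s)
        if r * r != s:
            return False
    return True

print(is_diophantine_tuple([1, 3, 8, 120]))  # True: 1*3+1 = 4, 3*120+1 = 361 = 19^2, etc.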
An order picking system in a distribution center (DC) owned by Pep Stores Ltd. (PEP), the largest single-brand retailer in South Africa, is investigated. Twelve independent unidirectional picking lines situated in the center of the DC are used to process all piece picking. Each picking line consists of a number of locations arranged in a cyclical formation around a central conveyor belt and is serviced by multiple pickers walking in a clockwise direction.
On a daily planning level, three sequential decision tiers exist and are described as:
These sub-problems are too complex to solve together and are addressed independently and in reverse sequence using mathematical programming and heuristic techniques. It is shown that the total walking distance of pickers may be significantly reduced when solving sub-problems 1 and 3, and that there is no significant impact when solving sub-problem 2. Moreover, by introducing additional work balance and small carton minimisation objectives into sub-problem 1, better trade-offs between objectives are achieved compared to current practice.
In this presentation we address the issues and challenges for the future of education and how Maplesoft is committed to offering tools such as Möbius™ to meet these challenges. Möbius is a comprehensive online courseware environment that focuses on science, technology, engineering, and mathematics (STEM). It is built on the notion that people learn by doing. With Möbius, your students can explore important concepts using engaging, interactive applications, visualize problems and solutions, and test their understanding by answering questions that are graded instantly. Throughout the entire lesson, students remain actively engaged with the material and receive constant feedback that solidifies their understanding.
When you use Möbius to develop and deliver your online offerings, you remain in full control of your content and the learning experience.
For more information on Möbius, please visit http://maplesoft.com/products/Mobius/.
The Degree/Diameter Problem for graphs has its motivation in the efficient design of interconnection networks. It seeks the maximum possible order of a graph with a given (maximum) degree and diameter. It is known that graphs attaining the maximum possible value (the Moore bound) are extremely rare, but much activity is focussed on finding new examples of graphs or families of graphs with orders approaching the bound as closely as possible. The problem was first mentioned in 1964. Many great mathematicians have studied it and obtained results, but a lot of unsolved problems about this subject remain. Our late professor Mirka Miller greatly expanded this area, and many new results were given by her and her students. One of the problems she was recently interested in was the Degree/Diameter problem for mixed graphs, i.e. graphs in which we allow both undirected edges and arcs (directed edges).
Some new results about the Moore bound for mixed graphs were obtained in 2015. This talk will present the main known results about these graphs.
We will review the (now classical) scheme of basic ($q$-) hypergeometric orthogonal polynomials. It contains more than twenty families; for each family there exists at least one positive weight with respect to which the polynomials are orthogonal, provided the parameter $q$ is real and lies between 0 and 1. In the talk we will describe how to reduce the scheme by allowing the parameters in the families to be complex. The construction leads to new orthogonality properties or to generalizations of known ones to the complex plane.
Model sets, which go back to Yves Meyer (1972), are a versatile class of structures with amazing harmonic properties. They are particularly relevant for mathematical quasicrystals. More recently, also systems such as the square-free integers or the visible lattice points have been studied in this context, leading to the theory of weak model sets. This talk will review some of the development, and introduce some of the concepts in the field.
Peter Frankl's union-closed sets conjecture, which dates back to (at least) 1979, states that for every finite family of sets which is closed under taking unions there is an element contained in at least half of the sets. Despite considerable efforts the general conjecture is still open, and the latest polymath project is an attempt to make progress. I will give an overview of equivalent variants of the conjecture and discuss known special cases and partial results.
Start labelling the vertices of the square grid with 0's and 1's with the condition that any pair of neighbouring vertices cannot both be labelled 1. If one considers the 1's to be the centres of small squares (rotated 45 degrees) then one has a picture of square-particles that cannot overlap.
This problem of "hard-squares" appears in different areas of mathematics - for example it has appeared separately as a lattice gas in statistical mechanics, as independent sets in combinatorics and as the golden-mean shift in symbolic dynamics. A core question in this model is to quantify the number of legal configurations - the entropy. In this talk I will discuss what is known about the entropy and describe our recent work finding rigorous and precise bounds for hard-squares and related problems.
This is work together with Yao-ban Chan.
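To make the counting question concrete, here is a small transfer-matrix sketch in Python (my own illustration, not the speakers' code) that enumerates the legal hard-square configurations of an n x n grid; log(count)/n^2 gives a crude feel for the entropy per site.

from math import log

def hard_square_count(n):
    # Count 0/1 labellings of the n x n grid in which no two adjacent vertices are both 1,
    # building the grid row by row. A row is encoded as a bitmask.
    rows = [r for r in range(1 << n) if r & (r >> 1) == 0]      # no horizontal pair of 1's
    counts = {r: 1 for r in rows}                               # configurations ending in row r
    for _ in range(n - 1):
        counts = {r: sum(c for s, c in counts.items() if r & s == 0)   # no vertical pair of 1's
                  for r in rows}
    return sum(counts.values())

for n in range(1, 7):
    c = hard_square_count(n)
    print(n, c, log(c) / n**2)   # crude per-site entropy estimate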
I’m going to give a summary of research projects I have been involved in over my study leave; they represent a shared theme: retailing. The projects which I’m going to talk about are:
We report on insights gained into the BMath through "Conversations: BMath Experiences", a project initiated by the BMath Convener in collaboration with NUMERIC. We invited first-year BMath students to semi-structured conversations around their experiences in their degree. We will share the general insights into the BMath that we have gained through the project.
Speakers: Mike Meylan, Andrew Kepert, Liz Stojanovski and Judy-anne Osborn.
I'll continue to discuss Frankl's union-closed sets conjecture. In particular I'll present two possible approaches (local configurations and averaging) and indicate obstacles to proving the general case using these methods.
How confident are you in your choice? Such a simple but important question for people to answer. Yet, capturing how people answer this question has proven challenging for mathematical models of cognition. Part of the challenge has been that these models assume confidence is a static variable based on the same information used to make a decision. In the first part of my talk, I will review my dynamic theory of confidence, two-stage dynamic signal detection theory (2DSD). 2DSD is based on the premise that the same evidence accumulation process that underlies choice is used to make confidence judgments, but that post-decisional processing of information contributes to confidence judgments. Thus, 2DSD correctly predicts that the resolution of confidence judgments, or their ability to discriminate between correct and incorrect choices, increases over time. However, I have also found that the dynamics of confidence are driven by other factors, including the very act of making a choice. In the second part of the talk, I will show how 2DSD and other models derived from classical stochastic theories are unable to parsimoniously account for this stable interference effect of choice. In contrast, quantum random walk models of evidence accumulation account for this property by treating judgments and decisions as a measurement process by which a definite state is created from an indefinite state. In summary, I hope to show how better understanding the dynamic nature of confidence can provide new methods for improving the accuracy of people’s confidence, but also reveal new properties of the deliberation process, including perhaps the quantum nature of evidence accumulation.
In this talk, I will outline my interest in, and results towards, the Erdős Discrepancy Problem (EDP). I came across this problem as a PhD student sometime around 2007. At the time, many of the best number theorists in the world thought that this problem would outlast the Riemann hypothesis. I had run into some interesting examples of structured sequences with very small growth, and in some of my early talks, I outlined a way one might be able to attack the EDP. As it turns out, the solution reflected quite a bit of what I had guessed. And I say 'guessed' because I was so young and naïve that my guess was nowhere near informed enough, nor backed by enough experience, to call it a conjecture. In this talk, I will go into what I was thinking and provide proof sketches of what turn out to be the extremal examples of the EDP.
The discrepancy of a graph measures how evenly its edges are distributed. I will talk about a lower bound which was proved by Bollobas and Scott in 2006 and extends older results of Erdos, Goldberg, Pach and Spencer. The proof provides a nice illustration of the probabilistic method in combinatorics. If time allows I will outline how these results can be used to prove something about convex hulls of bilinear functions.
Mapping class groups are groups which arise naturally from homeomorphisms of surfaces. They are ubiquitous: from hyperbolic geometry, to combinatorial group theory, to algebraic geometry, to low dimensional topology, to dynamics. Even to this colloquium!
In this talk, I will give a survey of some of the highlights from this beautiful world, focusing on how mapping class groups interact with covering spaces of surfaces. In particular, we will see how a particular order 2 element (the hyperelliptic involution) and its centraliser (the hyperelliptic mapping class group) play an important role, both within the world of mapping class groups and in other areas of mathematics. If time permits, I will briefly touch on some recent joint work with Rebecca Winarski that generalises the hyperelliptic story.
No experience with mapping class groups will be assumed, and this talk will be aimed at a general mathematics audience.
B. Gordon (1961) defined sequenceable groups and G. Ringel (1974) defined R-sequenceable groups. Friedlander, Gordon and Miller conjectured that finite abelian groups are either sequenceable or R-sequenceable. The preceding definitions are special cases of what T. Kalinowski and I are calling an orthogonalizeable group, namely, a group for which every Cayley digraph on the group admits either an orthogonal directed path or an orthogonal directed cycle. I shall go over the history and current status of this topic along with a discussion about the completion of a proof of the FGM conjecture.
Start by placing piles of indistinguishable chips on the vertices of a graph. A vertex can fire if it's supercritical; i.e., if its chip count exceeds its valency. When this happens, it sends one chip to each neighbour and annihilates one chip. Initialize a game by firing all possible vertices until no supercriticals remain. Then drop chips one-by-one on randomly selected vertices, at each step firing any supercritical ones. Perhaps surprisingly, this seemingly haphazard process admits analysis. And besides having diverse applications (e.g., in modelling avalanches, earthquakes, traffic jams, and brain activity), chip-firing reaches into numerous mathematical crevices. The latter include, alphabetically, algebraic combinatorics, discrepancy theory, enumeration, graph theory, stochastic processes, and the list could go on (to zonotopes). I'll share some joint work—with Dave Perkins—that touches on a few items from this list. The talk'll be accessible to non-specialists. Promise!
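The following toy Python sketch (mine, not the speakers' code) implements exactly the rule described above, dropping chips on a 4-cycle and stabilising after each drop; the process terminates because every firing annihilates one chip.

import random

def stabilise(chips, adj):
    # Fire supercritical vertices until none remain: a vertex whose chip count exceeds
    # its valency sends one chip to each neighbour and annihilates one chip.
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            if chips[v] > len(nbrs):
                chips[v] -= len(nbrs) + 1
                for u in nbrs:
                    chips[u] += 1
                changed = True
    return chips

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # a 4-cycle
chips = {v: 0 for v in adj}
for _ in range(20):
    chips[random.choice(list(adj))] += 1             # drop a chip on a random vertex
    stabilise(chips, adj)
print(chips)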
I am now refereeing a manuscript on the above and I’ll tell you about its contents.
This week I shall finish my discussion of sequenceable and R-sequenceable groups.
A metric generator is a set W of vertices of a graph G such that for every pair of vertices u,v of G, there exists a vertex w in W with the condition that the length of a shortest path from u to w is different from the length of a shortest path from v to w. In this case the vertex w is said to resolve or distinguish the vertices u and v. The minimum cardinality of a metric generator for G is called the metric dimension. The metric dimension problem is to find a minimum metric generator in a graph G. In this talk I will discuss the metric dimension and partition dimension of Cayley (di)graphs.
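As an illustration of the definition (a brute-force Python sketch of my own, feasible only for small graphs), one can compute the metric dimension by checking all candidate sets W; for the 5-cycle, a Cayley graph of Z_5, it returns 2.

from itertools import combinations
from collections import deque

def metric_dimension(adj):
    # All-pairs distances by BFS, then the smallest W whose distance vectors
    # distinguish every pair of vertices.
    V = list(adj)
    dist = {}
    for s in V:
        d = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in d:
                    d[u] = d[v] + 1
                    q.append(u)
        dist[s] = d
    for k in range(1, len(V) + 1):
        for W in combinations(V, k):
            vectors = {tuple(dist[w][v] for w in W) for v in V}
            if len(vectors) == len(V):        # every vertex gets a distinct vector
                return k, W

C5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(metric_dimension(C5))   # (2, (0, 1))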
Lehmer's famous question concerns the existence of monic integer coefficient polynomials with Mahler measure smaller than a certain constant. Despite significant partial progress, the problem has not been fully resolved since its formulation in 1933. A powerful result proven independently by Lawton and Boyd in the 1980s establishes a connection between the classical Mahler measure of single-variable polynomials and the generalized Mahler measure of multivariate polynomials. This led to speculation that it may be possible to answer Lehmer's question in the affirmative with a multivariate polynomial, although the general consensus among researchers today is that no such polynomial exists. We show that each possible candidate among two-variable polynomials corresponding to curves of genus 1 can be birationally mapped onto a polynomial with Mahler measure greater than Lehmer's constant. Such birational maps are expected to preserve the Mahler measure for large values of a certain parameter.
Milutin is completing an Honours degree under the supervision of Wadim Zudilin.
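For orientation, the following short numerical sketch (my own, using the standard definition of the Mahler measure as the product of the absolute value of the leading coefficient and the absolute values of the roots outside the unit circle) computes the Mahler measure of Lehmer's polynomial, whose value of about 1.17628 is the smallest known measure exceeding 1.

import numpy as np

def mahler_measure(coeffs):
    # coeffs lists the coefficients from the highest degree down.
    roots = np.roots(coeffs)
    m = abs(coeffs[0])
    for r in roots:
        m *= max(1.0, abs(r))
    return m

# Lehmer's polynomial x^10 + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1
lehmer = [1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]
print(mahler_measure(lehmer))   # approximately 1.17628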
In 2000, after investigating the published literature (for which I had reason then), I realised that there was clearly confusion surrounding the question of how WW2 Japanese army and navy codes had been broken by the Allies.
Fourteen years later, my academic colleague Peter Donovan and I understood why that was so: the archival documents needed to perform this task, plus the mathematical understanding needed to interpret these documents correctly, had only come to light through our combined research over this long period. The result, apart from a number of research publications in journals, is our book, "Code Breaking in the Pacific", published by Springer International in 2014.
Both the Imperial Japanese Army (IJA) and the Imperial Japanese Navy (IJN) used an encryption system involving a code book and then a second-stage encipherment, a system which we call an additive cipher system, for their major codes – not a machine cipher such as the Enigma machines used widely by German forces in WW2 or the Typex/Sigaba/ECM machines used by the Allies. Thus, the type of attack needed to crack such a system is very different from those described in books about Bletchley Park and its successes against Enigma ciphers.
However, there is a singular difference: while the IJN’s main coding system, known to us as JN-25, was broken from its inception and throughout the Pacific War, yielding for example the intelligence information that enabled the battles of the Coral Sea and Midway to occur, or the shooting down of Admiral Yamamoto to be planned, the many IJA coding systems in use were, with one exception, never broken!
I will describe the general structure of additive systems, the rational way developed to attack them and its usual failure in practice, and the "miracle" that enabled JN-25 to be broken - probably the best-kept secret of the entire Pacific War: multiples of three! Good maths, but not highly technical!
The zero forcing number, Z(G), of a graph G is the minimum cardinality of a set S of black vertices (the vertices in V(G)\S are colored white) such that V(G) is turned black after finitely many applications of "the color-change rule": a white vertex is converted to black if it is the only white neighbor of a black vertex.
Zero forcing number was introduced by the "AIM Minimum Rank – Special Graphs Work Group". In this talk, I present an overview of the results obtained from their paper.
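For a hands-on feel for the colour-change rule (a brute-force Python sketch of my own, practical only for small graphs), the following code checks whether a set forces the whole graph and finds Z(G); for a path on 5 vertices it returns 1.

from itertools import combinations

def forces_all(G, S):
    # Repeatedly apply the colour-change rule: a white vertex is converted to black
    # when it is the only white neighbour of some black vertex.
    black = set(S)
    changed = True
    while changed:
        changed = False
        for v in list(black):
            white_nbrs = [u for u in G[v] if u not in black]
            if len(white_nbrs) == 1:
                black.add(white_nbrs[0])
                changed = True
    return len(black) == len(G)

def zero_forcing_number(G):
    V = list(G)
    for k in range(1, len(V) + 1):
        if any(forces_all(G, S) for S in combinations(V, k)):
            return k

P5 = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}   # path on 5 vertices
print(zero_forcing_number(P5))   # 1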
In 1935 Erdos and Szekeres proved that there exists a function f such that among any f(n) points in the plane in general position there are always n that form the vertices of a convex n-gon. More precisely, they proved a lower and an upper bound for f(n) and conjectured that the lower bound is sharp. After 70 years with very limited progress, there have been a couple of small improvements of the upper bound in recent years, and finally last month Andrew Suk announced a huge step forward: a proof of an asymptotic version of the conjecture.
I plan two talks on this topic: (1) a brief introduction to Ramsey theory, and (2) an outline of Suk's proof.
I continue the discussion of the Erdos-Szekeres conjecture about points in convex position with an outline of the recent proof of an asymptotic version of the conjecture.
The density of 1's in the Kolakoski sequence is conjectured to be 1/2. Proving this is an open problem in number theory. I shall cast the density question as a problem in combinatorics, and give some visualisations which may suggest ways to gain further insight into the conjecture.
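As a quick empirical illustration (my own Python sketch, not part of the talk), one can generate the sequence by reading off its own run lengths and watch the proportion of 1's hover near 1/2.

def kolakoski(n):
    # First n terms of the Kolakoski sequence 1,2,2,1,1,2,1,2,2,...;
    # seq[i] gives the length of the (i+1)-th run, and runs alternate between 1's and 2's.
    seq = [1, 2, 2]
    i = 2
    while len(seq) < n:
        next_val = 1 if seq[-1] == 2 else 2
        seq.extend([next_val] * seq[i])
        i += 1
    return seq[:n]

s = kolakoski(10**6)
print(s.count(1) / len(s))   # empirically very close to 0.5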
The finite element method is a very popular technique for approximating solutions of partial differential equations. The mixed finite element method is a type of finite element method in which extra variables are introduced into the formulation. This is useful for problems where more than one unknown is of interest. In this research, we apply the mixed finite element method to several applications, such as the Poisson equation, the elasticity equation, and a sixth-order problem. Furthermore, we also use the mixed finite element method to solve a linear wave equation arising from a real-world problem.
Let $\Sigma_d^{++}(\R)$ be the set of positive definite matrices with determinant 1 in dimension $d\ge 2$. Identifying two $SL_d(\Z)$-congruent elements in $\Sigma_d^{++}(\R)$ gives rise to the space of reduced quadratic forms of determinant one, which in turn can be identified with the locally symmetric space $X_d:=SL_d(\Z)\backslash SL_d(\R)\slash SO_d(\R)$. Equip the latter space with its natural probability measure coming from the Haar measure on $SL_d(\R)$. In 1998, Kleinbock and Margulis established very sharp estimates for the probability that an element of $X_d$ takes a value less than a given real number $\delta>0$ over the non-zero lattice points $\Z^d\backslash\{ \bm{0} \}$.
This talk will be concerned with extensions of such estimates to a large class of probability measures arising either from the spectral or the Cholesky decomposition of an element of $\Sigma_d^{++}(\R)$. The sharpness of the bounds thus obtained is also established for a subclass of these measures.
This theory has been developed with a view towards applications in Information Theory. Time permitting, we will briefly introduce this topic and show how the estimates previously obtained play a crucial role in the analysis of the performance of communication networks.
This is joint work with Evgeniy Zorin (University of York). Dr Adiceam is a visitor of Dr Mumtaz Hussain.
The standard height function $H(\mathbf p/q) = q$ of simultaneous approximation can be calculated by taking the LCM (least common multiple) of the denominators of the coordinates of the rational point: $H(p_1/q_1,\ldots,p_d/q_d) = \mathrm{lcm}(q_1,\ldots,q_d)$. If the LCM operator is replaced by another operator such as the maximum, minimum, or product, then a different height function and thus a different theory of simultaneous approximation will result. In this talk I will discuss some basic results regarding approximation by these nonstandard height functions, as well as mentioning their connection with intrinsic approximation on Segre manifolds using standard height functions. This work is joint with Lior Fishman.
Dr Simmons is a visitor of Dr Mumtaz Hussain.
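To see how the competing height functions differ, here is a tiny worked example in Python (my own, assuming Python 3.9+ for math.lcm): for the point (1/2, 1/3, 5/6) the four heights already disagree.

from math import gcd, lcm, prod

def heights(point):
    # point is a list of (numerator, denominator) pairs; reduce each fraction,
    # then combine the denominators with lcm (standard), max, min and product.
    dens = [q // gcd(p, q) for p, q in point]
    return {"lcm": lcm(*dens), "max": max(dens), "min": min(dens), "prod": prod(dens)}

print(heights([(1, 2), (1, 3), (5, 6)]))
# {'lcm': 6, 'max': 6, 'min': 2, 'prod': 36}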
Come join us for a discussion and public forum on 'Creativity & Mathematics' at Newcastle Museum on Monday, 1st August. We've lined up world leading experts from a diverse set of disciplines to shed some light on the connection between creativity and mathematics.
It's free, but please register for catering purposes. It begins at 6:30 pm with finger food and a chat before the forum itself gets under way at 7 pm.
The panel discussion and forum will have lots of audience involvement. The panel members are from a diverse group of disciplines each concerned in some way with the relationship between creativity and mathematics. Prof. John Wilson (The University of Oxford), a leading expert on group theory, is intrigued by the similarities between mathematicians finding new ideas and composers creating new music. Prof. George Willis (University of Newcastle) will talk about the creativity of mathematics itself. Prof. Michael Ostwald will spin gold around mathematical constraints and architectural forms. A/Prof. Phillip McIntyre is an international expert on creativity and author of The Creative System in Action. He has been described as having a mind completely unpolluted by mathematics!
Come along and enjoy an evening of mental stimulation and unexpected insights. You never know: participants might walk away with a completely different view of mathematics and its place in the world.
The formation of high-mass stars (> 8 times more massive than our sun) poses an enormous challenge in modern astrophysics. Theoretically, it is difficult to understand whether the final mass of a high-mass star is accreted locally or from afar. Observationally, it is difficult to observe the early cold stages because they have relatively short lifetimes and also occur in very opaque molecular clouds. These early stages, however, can be probed by emission from molecular lines emitting at centimetre, millimetre, and sub-millimetre wavelengths. Our recent work clearly demonstrates that dense molecular clumps embedded in the filamentary "Infrared Dark Clouds" spawn high-mass stars, and that these clumps evolve as star-formation activity progresses within them. We have now identified hundreds of clumps in the earliest "pre-stellar" stage. Our MALT90 and RAMPS surveys reveal that these clumps are collapsing, confirming a prediction from "competitive accretion" models. New observations with the ALMA telescope demonstrate that turbulence--and not gravity--dominates the structure of "the Brick", the Milky Way's most massive "pre-stellar" clump.
We discuss ongoing work in convex and non-convex optimization. In the convex setting, we use symbolic computation to study problems which require minimizing a function subject to constraints. In the non-convex setting, we use a variety of computational means to study the behavior of the iterated Douglas-Rachford method for solving feasibility problems, that is, finding an element in the intersection of several sets.
For over 25 years, Wolfram Research has been serving educators and researchers. In the past 5 years, we have introduced many award-winning technology innovations like Wolfram|Alpha Pro, Wolfram SystemModeler, Wolfram Programming Lab, and natural language computation. Join Craig Bauling as he guides us through the capabilities of Mathematica. Craig will demonstrate the key features that are directly applicable for use in teaching and research. Topics of this technical talk include
Prior knowledge of Mathematica is not required - new users are encouraged. Current users will benefit from seeing the many improvements and new features of Mathematica 11.
König (1936) asked whether every finite group G is realized as the automorphism group of a graph. Frucht answered the question in the affirmative, and his answer involved graphs whose orders were substantially bigger than the orders of the groups, leading to the question of finding the smallest graph with a given automorphism group. We shall discuss some of the early work on this problem and some recent results for the family of dihedral groups.
Many governments and international finance organisations use a carbon price in cost-benefit analyses, emissions trading schemes, quantification of energy subsidies, and modelling the impact of climate change on financial assets. The most commonly used value in this context is the social cost of carbon (SCC). Users of the social cost of carbon include the US, UK, German, and other governments, as well as organisations such as the World Bank, the International Monetary Fund, and Citigroup. Consequently, the social cost of carbon is a key factor driving worldwide investment decisions worth many trillions of dollars.
The social cost of carbon is derived using integrated assessment models that combine simplified models of the climate and the economy. One of three dominant models used in the calculation of the social cost of carbon is the Dynamic Integrated model of Climate and the Economy, or DICE. DICE contains approximately 70 parameters as well as several exogenous driving signals such as population growth and a measure of technological progress. Given the quantity of finance tied up in a figure derived from this simple highly parameterized model, understanding uncertainty in the model and capturing its effects on the social cost of carbon is of paramount importance. Indeed, in late January this year the US National Academies of Sciences, Engineering, and Medicine released a report calling for discussion on the various types of uncertainty in the overall SCC estimation approach and addressing how different models used in SCC estimation capture uncertainty.
This talk, which focuses on the DICE model, essentially consists of two parts. In Part One, I will describe the social cost of carbon and the DICE model at a high level, and will present some interesting preliminary results relating to uncertainty and the impact of realistic constraints on emissions mitigation efforts. Part One will be accessible to a broad audience and will not require any specific technical background knowledge. In Part Two, I will provide a more detailed description of the DICE model, describe precisely how the social cost of carbon is calculated, and indicate ongoing developments aimed at improving estimates of the social cost of carbon.
This week we shall continue by introducing the cast of characters to be used for producing minimal-order graphs with dihedral automorphism group.
Maintenance plays a crucial role in the management of rail infrastructure systems as it ensures that infrastructure assets (e.g., tracks, signals, and rail crossings) are in a condition that allows safe, reliable, and efficient transport. An important and challenging problem facing planners is the scheduling of maintenance activities which must consider the movement and availability of the maintenance resources (e.g., equipment and crews). The problem can be viewed as an inventory routing problem (IRP) in which vehicles deliver product to customers so as to ensure that the customers have sufficient inventory to meet future demand. In the case of rail maintenance, the customers are the infrastructure assets, the vehicles correspond to the resources used to perform the maintenance, and the product that is in demand, the inventory of which is replenished by the vehicle, is the asset condition. To the best of our knowledge, such a viewpoint of rail maintenance has not been previously considered.
In this thesis we will study the IRP in the rail maintenance scheduling context. There are several important differences between the classical IRP and our version of the problem. Firstly, we need to differentiate between stationary and moving maintenance. Stationary maintenance can be thought of as having demand for product at a specific location, or point, while moving maintenance is more like demand for product distributed along a line between two points. Secondly, when performing maintenance, trains may be subject to speed restrictions, be delayed, or be rerouted, all of which affect the infrastructure assets and their condition differently. Finally, the long-term maintenance schedules that are of interest are developed annually. IRPs with such a long planning horizon are intractable to direct solution approaches and therefore require the development of customised solution methodologies.
I am studying the complexity of solving equations over different algebraic objects, like free groups, virtually free groups, and hyperbolic groups. We have an NSPACE(n log n) algorithm to find solutions in free groups, which I will try to briefly explain. Applications include pattern recognition and machine learning, and first order theories in logic.
Learning to rank is a machine learning technique broadly used in many areas such as document retrieval, collaborative filtering or question answering. We present experimental results which suggest that the performance of the current state-of-the-art learning to rank algorithm LambdaMART, when used for document retrieval for search engines, can be improved if standard regression trees are replaced by oblivious trees. This paper provides a comparison of both variants, and our results demonstrate that the use of oblivious trees can improve the performance by more than 2.2%. Additional experimental analysis of the influence of the number of features and of the size of the training set is also provided and confirms the desirability of the properties of oblivious decision trees.
About the Speaker: Dr Michal Ferov is a Postdoctoral Research Fellow in the School of Mathematical and Physical Sciences, Faculty of Science and Information Technology.
This afternoon (31 October) we shall complete the discussion about vertex-minimal graphs with dihedral automorphism groups. I have attached an outline of what was covered in the first two weeks.
In this talk I will present a class of C*-algebras known as "generalised Bunce-Deddens algebras" which were constructed by Kribs and Solel in 2007 from directed graphs and sequences of natural numbers. I will present answers to questions asked by Kribs and Solel about the simplicity and the classification of these C*-algebras. These results are from my PhD thesis supervised by Dave Robertson and Aidan Sims.
Today's discrete mathematics seminar is dedicated to Mirka Miller. I am going to present the beautiful Hoffman-Singleton (1960) paper which established the possible valencies for Moore graphs of diameter 2, gave us the Hoffman-Singleton graph of order 50, and gave us one of the intriguing still-unsettled problems in combinatorics. The proof is entirely linear algebra and is one that any serious student in discrete mathematics should see at some point. This is the general area in which Mirka made many contributions.
Targeted Audience: All early career staff and PhD students; other staff welcome
Abstract: Many of us have been involved in discussions revolving around the problem of choosing suitable thesis topics and projects for post-graduate students, honours students and vacation research students. The panel is going to present some ideas that we hope people in the audience will find useful as they get ready for or continue with their careers.
About the Speakers: Professor Brian Alspach has supervised thirteen PhDs, twenty-five MScs, nine post-doctoral fellows and a dozen undergraduate scholars over his fifty-year career. Professor Eric Beh has 20 years' international experience in the analysis of categorical data with a focus on data visualisation. He has supervised, or is currently supervising, about 10 PhD students. Dr Mike Meylan has twenty years' research experience in applied mathematics, both leading projects and working with others. He has supervised 5 PhD students and three post-doctoral fellows.
For a two-coloring of the vertex set of a simple graph $G$ consider the following color-change rule: a red vertex is converted to blue if it is the only red neighbor of some blue vertex. A vertex set $S$ is called zero-forcing if, starting with the vertices in $S$ blue and the vertices in the complement $V \setminus S$ red, all the vertices can be converted to blue by repeatedly applying the color-change rule. The minimum cardinality of a zero-forcing set for the graph $G$ is called the zero-forcing number of $G$, denoted by $Z(G)$.
There is a conjecture connecting zero forcing number, minimum degree $d$ and girth $g$ as follows: "If G is a graph with girth $g \geq 3$ and minimum degree $d \geq 2$, then $Z(G) \geq d+ (d-2)(g-3)$".
I shall discuss a recent paper where the conjecture is proved to be true for all graphs with girth $\leq 10$.
A challenge with our large-enrolment courses is to manage assessment resources: questions, quizzes, assignments and exams. We want traditional in-class assessment to be easier, quicker and more reliable to produce, in particular where multiple versions of each assessment are required. Our approach is to
We have implemented this within standard software: LaTeX, Ruby, git, and our favourite mathematics software.
We will briefly show off our achievements in 2016, including new features of the software and how we've used them in our teaching. We then invite discussion on what we can do to help our colleagues use these tools.
The Australian Council on Healthcare Standards collates data on measures of performance in a clinical setting in six-month periods. How can these data best be utilised to inform decision-making and systems improvement? What are the perils associated with collecting data in six-month periods, and how may these be addressed? Are there better ways to analyse, report and guide policy?
The Council for Aid to Education is one of many organisations internationally attempting to assess tertiary institutional performance. Value-add modelling is a technique intended to inform system performance. How valid and reliable are these techniques? Can they be improved?
Educational techniques and outreach activities are employed across the education system and the wider community for the purposes of increasing access, equity and understanding.
When new concepts are formed, a well-designed instrument to assess and provide evidence of their performance is required. Does immersion in professional experience activity enable pre-service teachers to achieve teaching standards? Do engagement activities for schools in remote and rural areas increase students’ aspirations and engagement with tertiary institutions?
Forensic anthropologists deal with the collection of bones and profiling individuals based on the remains found. How can statistics inform such decision-making?
Such questions and existing and potential answers will be discussed in the context of research collaborations with Taipei Medical University (Taiwan), Health Services Research Group, Australian Council on Healthcare Standards, Hunter Medical Research Institute, School of Education, Wollotuka Institute, School of Environmental Sciences and a Forensic Anthropologist.
Some Engel words and also commutators of commutators can be expressed as products of powers. I discuss recent work of Colin Ramsay in this area, using PEACE (Proof Extraction After Coset Enumeration), and in particular provide expressions for commutators of commutators as short products of cubes.
I will discuss how to solve free group equations using a practical computer program. Ciobanu, Diekert and Elder recently gave a theoretical algorithm which runs in nondeterministic space $n\log n$, but implementing their method as an actual computer program presents many challenges, which I will describe.
Incremental stability describes the asymptotic behavior between any two trajectories of dynamical systems. Such properties are of interest, for example, in the study of observers or synchronization of chaos. In this paper, we develop the notions of incremental stability and incremental input-to-state stability (ISS) for discrete-time systems. We derive Lyapunov function characterizations for these properties as well as a useful summation-to-summation formulation of the incremental stability property.
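For reference, a standard formulation of incremental ISS for a discrete-time system $x(k+1)=f(x(k),u(k))$ (my paraphrase of the commonly used definition, not a quotation from the paper) asks for a $\mathcal{KL}$-function $\beta$ and a $\mathcal{K}$-function $\gamma$ such that
\[
  |x(k,\xi_1,u_1) - x(k,\xi_2,u_2)| \;\le\; \beta\bigl(|\xi_1-\xi_2|,\,k\bigr) + \gamma\bigl(\|u_1-u_2\|_\infty\bigr)
  \qquad \text{for all } k\ge 0,\ \text{initial states } \xi_1,\xi_2,\ \text{and inputs } u_1,u_2 .
\]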
A graph labeling is an assignment of integers to the vertices or edges, or both, subject to certain conditions. These conditions are usually expressed in terms of the weights of some evaluating function. Based on these conditions there are several types of graph labelings, such as graceful, magic, antimagic, sum and irregular labelings. In this research, we look at the H-supermagic labeling of firecracker, banana tree, flower and grid graphs; the exclusive sum labelling of trees; and the edge irregularity strength of grid graphs.