More 2010 events can be found on the "older events" page.
A continuation of last week's talk.
We consider multi-objective convex optimal control problems. First we state a relationship between the (weakly or properly) efficient set of the multi-objective problem and the solution of the problem scalarized via a convex combination of objectives through a vector of parameters (or weights). Then we establish that (i) the solution of the scalarized (parametric) problem for any given parameter vector is unique and (weakly or properly) efficient and (ii) for each solution in the (weakly or properly) efficient set, there exists at least one corresponding parameter vector for the scalarized problem yielding the same solution. Therefore the set of all parametric solutions (obtained by solving the scalarized problem) is equal to the efficient set. Next we consider an additional objective over the efficient set. Based on the main result, the new objective can instead be considered over the (parametric) solution set of the scalarized problem. For the purpose of constructing numerical methods, we point to existing solution differentiability results for parametric optimal control problems. We propose numerical methods and give an example application to illustrate our approach. This is joint work with Henri Bonnel (University of New Caledonia).
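As a hedged illustration of the weighted-sum scalarization idea on a toy convex problem (a static bi-objective problem in R^2 rather than an optimal control problem; all data below are invented for the example), each scalarized problem has a unique solution by strict convexity, and sweeping the weight traces out the efficient set:

    import numpy as np
    from scipy.optimize import minimize

    # Toy convex bi-objective: f1(x) = ||x - a||^2, f2(x) = ||x - b||^2 on R^2;
    # the efficient set is the segment joining a and b.
    a, b = np.array([0.0, 0.0]), np.array([2.0, 1.0])
    f1 = lambda x: np.sum((x - a) ** 2)
    f2 = lambda x: np.sum((x - b) ** 2)

    # Each scalarized problem min_x w*f1(x) + (1-w)*f2(x) has a unique
    # solution (strict convexity); sweeping w recovers the efficient set.
    for w in np.linspace(0.1, 0.9, 5):
        res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=np.zeros(2))
        print(f"w = {w:.1f}  efficient point = {res.x.round(3)}")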
This talk will introduce fractal transformations and some of their remarkable properties. I will explain the mathematics that sustains them and how to construct them in simple cases. In particular I hope to demonstrate a very recent result, showing how they can be applied to generate convenient mutually-singular measures that enable the storage of multiple images within a single image. The talk will include some beautiful computer graphics.
I will describe four recent theorems, developed jointly with Andrew Vince and David C. Wilson (both of the University of Florida), that reveal a surprisingly rich theory associated with an attractor of a projective iterated function system (IFS). The first theorem characterizes when a projective IFS has an attractor that avoids a hyperplane. The second theorem establishes that a projective IFS has at most one attractor. In the third theorem, the classical duality between points and hyperplanes in projective space leads to connections between attractors that avoid hyperplanes and repellers that avoid points, and an associated index, which is a nontrivial projective invariant, is defined. I will link these results to the Conley decomposition theorem.
Hyperbolic geometry will be introduced, and visualised via the Cinderella software suite. Simple constructions will be performed, and compared and contrasted with Euclidean geometry. Constructions and examples will be quite elementary. Audience participation, specifically suggestions for constructions to attempt during the demonstration, is actively encouraged. The speaker apologises in advance for not being nearly as knowledgeable of the subject as he probably ought to be.
Scenario Trees are compact representations of processes by which information becomes available. They are most commonly used when solving stochastic optimization problems with recourse, but they have many other uses. In this talk we discuss two uses of scenario trees: computing the value of hydroelectricity in a regulated market; and updating parameters of epidemiological models based on observations of syndromic data. In the former, we investigate the impact that the size and shape of the tree has on the dual price of the first stage demand constraint. In the latter, we summarize a simulation of epidemics and syndromic behaviors on a tree, then identify the subtree most likely to match observed data.
Mineral freight volume increases are driving transport infrastructure investments on Australia's east and west coasts. New and upgraded railways, roads and ports are planned or are under construction -- to serve new mines, processing facilities and international markets. One of the fastest growing regions is Northern Queensland, central to which is the so-called Northern Economic Triangle that has Rockhampton, Mt Isa and Townsville at its vertices. CSIRO has been working with the Queensland Government to construct a new GIS-based infrastructure planning optimisation system known as the Infrastructure Futures Analysis Platform (IFAP). IFAP can be used to build long-term (e.g. 25-year) plans for infrastructure development in regions such as the Northern Economic Triangle. IFAP consists of a commercial Geographic Information System (MapInfo), a database and a network optimisation solver that has been constructed by CSIRO and will ultimately be open-sourced. The prototype IFAP is nearing completion and in this presentation I will discuss the development process and the underlying network optimisation problem.
Joint work with Kim Levy, Andreas Ernst, Gaurav Singh, Stuart Woodman, Andrew Higgins, Leorey Marquez, Olena Gavriliouk and Dhananjay Thiruvady
Nonconvex/nonsmooth phenomena appear naturally in many complex systems. In static systems and global optimization problems, the nonconvexity usually leads to multi-solutions in the related governing equations. Each of these solutions represents a certain possible state of the system. How to identify the global and local stability and extremality of these critical solutions is a challenging task in nonconvex analysis and global optimization. The classical Lagrangian-type methods and the modern Fenchel-Moreau-Rockafellar duality theories usually produce the well-known duality gap. It turns out that many nonconvex problems in global optimization and computational science are considered to be NP-hard. In nonlinear dynamics, the so-called chaotic behavior is mainly due to nonconvexity of the objective functions. In nonlinear variational analysis and partial differential equations, the existence of nonsmooth solutions has been considered as an outstanding open problem.
In this talk, the speaker will present a potentially useful canonical duality theory for solving a class of optimization and control problems in complex systems. Starting from a very simple cubic nonlinear equation, the speaker will show that the optimal solutions for nonconvex systems are usually nonsmooth and cannot be captured by traditional local analysis and Newton-type methods. Based on the fundamental definitions of the objectivity and isotropy in continuum physics, the canonical duality theory is naturally developed, and can be used for solving a large class of nonconvex/nonsmooth/discrete problems in complex systems. The results illustrate the important fact that smooth analytic or numerical solutions of a nonlinear mixed boundary-value problem might not be minimizers of the associated variational problem. From a dual perspective, the convergence (or non-convergence) of the FDM is explained and numerical examples are provided. This talk should bring some new insights into nonconvex analysis, global optimization, and computational methods.
We shall continue from last week, performing more hyperbolic constructions, as well as some elliptic constructions for comparison. In particular, some alternating projection algorithms will be explored in hyperbolic space. With some luck we shall confound some more of our Euclidean intuitions. Audience participation is actively encouraged.
Machine learning problems are a particularly rich source of applications for sparse optimization, giving rise to a number of formulations that require specialized solvers and structured, approximate solutions. As case studies, we discuss two such applications - sparse SVM classification and sparse logistic regression - and present algorithms that are assembled from different components, including stochastic gradient methods, random approximate matrix factorizations, block coordinate descent, and projected Newton methods. We also describe a third (distantly related) application to selection of captive breeding populations for endangered species using binary quadratic programming, a project started during a visit to Newcastle in June 2009.
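As a hedged, minimal illustration of the sparse logistic regression application (using off-the-shelf scikit-learn rather than the specialized solvers discussed in the talk; the synthetic data are invented), the L1 penalty drives most coefficients exactly to zero:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic data: 200 samples, 50 features, only 5 truly informative.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    true_w = np.zeros(50)
    true_w[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]
    y = (X @ true_w + 0.1 * rng.standard_normal(200) > 0).astype(int)

    # The L1 penalty produces a sparse coefficient vector.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(X, y)
    print("nonzero coefficients:", np.count_nonzero(clf.coef_))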
CARMA reflects changes in the mathematical research being undertaken at Newcastle. Mathematics is "the language of high technology" which underpins all facets of modern life, while current Information and Communication Technology (ICT) has become ubiquitous. No other research centre exists which focuses on the implications of developments in ICT, present and future, for the practice of research mathematics. CARMA fills this gap through the exploitation and development of techniques and tools for computer-assisted discovery and disciplined data-mining, including mathematical visualization. Advanced mathematical computation is equally essential to the solution of real-world problems; sophisticated mathematics forms the core of software used by decision-makers, engineers, scientists, managers and those who design, plan and control the products and systems which are key to present-day life.
This talk is aimed at being accessible to all.
Sebastian will be presenting Lagrangian Relaxation and the Cost Splitting Dual on Monday July 5 at 4pm. He will discuss when the Cost Splitting Dual can provide a better bound than the Lagrangian Relaxation or regular linear relaxation and apply the theory learned to an example.
In Euclidean space the medians of a triangle meet at a point that divides each median in the ratio 2 to 1. That point is called the centroid. Cinderella tells us that the medians of a triangle in hyperbolic space meet at a point, but the medians do not divide each other in any fixed ratio. What characterises that point? One answer is that it is the centre of mass of equal-mass particles placed at the vertices. I will outline how one can define the centre of mass of a set of particles (points) in a Riemannian manifold, and how one can understand this in terms of the exponential map. This centre of mass, or geometric mean, is sometimes called the Karcher mean (apparently first introduced by Cartan!). I will attempt to show what this tells us about the medians of a triangle.
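For readers who want to experiment, here is a minimal sketch of the Karcher-mean iteration on the unit sphere S^2 (a standard fixed-point scheme using the log and exp maps; the sphere stands in for a general Riemannian manifold, and the sample points are invented):

    import numpy as np

    # Karcher mean of points on the unit sphere S^2 via the exponential map:
    # repeatedly average the tangent vectors log_x(p_i) and move along exp_x.
    def log_map(x, p):                      # tangent vector at x pointing to p
        d = p - np.dot(x, p) * x
        nd = np.linalg.norm(d)
        theta = np.arccos(np.clip(np.dot(x, p), -1.0, 1.0))
        return np.zeros(3) if nd < 1e-12 else theta * d / nd

    def exp_map(x, v):                      # geodesic from x with velocity v
        nv = np.linalg.norm(v)
        return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * v / nv

    def karcher_mean(points, iters=50):
        x = points[0] / np.linalg.norm(points[0])
        for _ in range(iters):
            v = np.mean([log_map(x, p) for p in points], axis=0)
            x = exp_map(x, v)
        return x

    pts = [np.array(p) / np.linalg.norm(p)
           for p in ([1.0, 0.1, 0.0], [0.0, 1.0, 0.1], [0.1, 0.0, 1.0])]
    print("Karcher mean:", karcher_mean(pts).round(4))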
In this talk we will present some results recently obtained in collaboration with B.F. Svaiter on maximal monotone operators in nonreflexive Banach spaces. The focus will be on the use of the concept of a convex representation of a maximal monotone operator for obtaining results on these operators of the following type: surjectivity of perturbations by duality mappings, uniqueness of the extension to the bidual, the Brøndsted-Rockafellar property, etc.
Several types of subdifferentials will be introduced on Riemannian manifolds. We'll show their properties and applications, including spectral functions, Borwein-Preiss variational principle, and distance functions.
We study the existence and approximation of fixed points of several types of Bregman nonexpansive operators in reflexive Banach spaces.
Please note the change in our usual day from Monday to Tuesday.
The Hu-Washizu formulation in elasticity is the mother of many different finite element methods in engineering computation. We present some modified Hu-Washizu formulations and their performance in removing the locking effect in nearly incompressible elasticity. The stabilisation of the standard Hu-Washizu formulation is used to obtain the stabilised nodal strain formulation, or node-based uniform strain elements. However, we show that the standard or stabilised nodal strain formulation should be modified to obtain a uniformly convergent finite element approximation in the nearly incompressible case.
The classical prolate spheroidal wavefunctions (prolates) arise when solving the Helmholtz equation by separation of variables in prolate spheroidal coordinates. They interpolate between Legendre polynomials and Hermite functions. In a beautiful series of papers published in the Bell Labs Technical Journal in the 1960's, they were rediscovered by Landau, Slepian and Pollak in connection with the spectral concentration problem. After years spent out of the limelight while wavelets drew the focus of mathematicians, physicists and electrical engineers, the popularity of the prolates has recently surged through their appearance in certain communication technologies. In this talk we discuss the remarkable properties of these functions, the "lucky accident" which enables their efficient computation, and give details of their role in the localised sampling of bandlimited signals.
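By way of a concrete, hedged illustration (the discrete analogue rather than the classical prolates themselves), SciPy exposes the discrete prolate spheroidal (Slepian) sequences, whose concentration ratios near 1 reflect the spectral concentration phenomenon the talk describes:

    import numpy as np
    from scipy.signal.windows import dpss

    # Discrete prolate spheroidal (Slepian) sequences: the k-th taper
    # maximises the fraction of its spectral energy in the band |f| <= W
    # (W = NW/N) among sequences orthogonal to the previous ones.
    N, NW, K = 256, 4, 6
    tapers, ratios = dpss(N, NW, Kmax=K, return_ratios=True)
    print("concentration ratios:", np.round(ratios, 6))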
In vehicle routing problems (VRPs), a fleet of vehicles must be routed to service the demands of a set of customers in a least-cost fashion. VRPs have been studied extensively by operations researchers for over 50 years. Due to their complexity, VRPs generally cannot be solved optimally, except for very small instances, so researchers have turned to heuristic algorithms that can generate high-quality solutions in reasonable run times. Along these lines, we develop novel integer programming-based heuristics for several different VRPs. We apply our heuristics to benchmark problems in the literature and report computational results to demonstrate their effectiveness.
We discuss a class of sums which involve complex powers of the distance to points in a two-dimensional square lattice and trigonometric functions of their angle. We give a general expression which permits numerical evaluation of members of the class of sums to arbitrary order. We use this to illustrate numerically the properties of trajectories along which the real and imaginary parts of the sums are zero, and we show results for the first two of a particular set of angular sums (denoted C(1, 4m; s)) which indicate that their density of zeros on the critical line of the complex exponent is the same as that for the product (denoted C(0, 1; s)) of the Riemann zeta function and the Catalan beta function. We then introduce a function which is the quotient of the angular lattice sums C(1, 4m; s) with C(0, 1; s), and use its properties to prove that C(1, 4m; s) obeys the Riemann hypothesis for any m if and only if C(0, 1; s) obeys the Riemann hypothesis. We furthermore prove that if the Riemann hypothesis holds, then C(1, 4m; s) and C(0, 1; s) have the same distribution of zeros on the critical line (in a sense made precise in the proof).
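The building block C(0, 1; s) can be checked numerically against its classical evaluation (the identity that the sum of (p²+q²)^(-s) over nonzero lattice points equals 4ζ(s)β(s), with β the Catalan beta function, is well known; the truncation radius below is an arbitrary choice for the sketch):

    from mpmath import mp, mpf, zeta, dirichlet

    mp.dps = 15
    s, N = 3, 100                                # truncation error ~ N^(-4)
    total = mpf(0)
    for p in range(-N, N + 1):
        for q in range(-N, N + 1):
            if p or q:                           # skip the origin
                total += mpf(p * p + q * q) ** (-s)

    beta = dirichlet(s, [0, 1, 0, -1])           # Catalan beta function
    print(total)
    print(4 * zeta(s) * beta)                    # should agree to ~8 digits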
I will describe the history and some recent research on a subject with a remarkable pedigree.
We discuss the Hahn-Banach-Lagrange theorem, a generalized form of the Hahn-Banach theorem. As applications, we derive various results on the existence of linear functionals in functional analysis, on the existence of Lagrange multipliers for convex optimization problems, with an explicit sharp lower bound on the norm of the solutions (multipliers), on finite families of convex functions (leading rapidly to a minimax theorem), on the existence of subgradients of convex functions, and on the Fenchel conjugate of a convex function. We give a complete proof of Rockafellar's version of the Fenchel duality theorem, and an explicit sharp lower bound for the norm of the solutions of the Fenchel duality theorem in terms of elementary geometric concepts.
Computation of definite integrals to high precision (typically several hundred digit precision) has emerged as a particularly fruitful tool for experimental mathematics. In many cases, integrals with no known analytic evaluations have been experimentally evaluated (pending subsequent formal proof) by applying techniques such as the PSLQ integer relation algorithm to the output numerical values. In other cases, intriguing linear relations have been found in a class of related integrals, relations which have subsequently been proven as instances of more general results. In this lecture, Bailey will introduce the two principal algorithms used for high-precision integration, namely Gaussian quadrature and tanh-sinh quadrature, with some details on efficient computer implementations. He will also present numerous examples of new mathematical results obtained, in part, by using these methods.
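A hedged sketch of tanh-sinh quadrature in practice, using mpmath (the example integral and its closed form are classical, and are chosen here for illustration rather than drawn from the lecture):

    from mpmath import mp, quad, log, pi

    mp.dps = 100    # work to 100 significant digits

    # tanh-sinh quadrature handles endpoint singularities gracefully and is
    # mpmath's default for finite intervals; Gauss-Legendre is also available.
    val = quad(lambda x: log(1 + x) / (1 + x * x), [0, 1], method='tanh-sinh')
    print(val)
    print(pi * log(2) / 8)    # known closed form, for comparison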
In a subsequent lecture, Bailey will discuss the PSLQ algorithm and give the details of efficient multi-level and parallel implementations.
Historically, fossil fuels have been vital for our global energy needs. However, climate change is prompting renewed interest in the role of fossil fuel production for our energy needs. In order to appropriately plan for our future energy needs, a new detailed model of fossil fuel supply is required. The modelling applied an algorithm-based approach to predict both supply and demand for coal, gas, oil and total fossil fuel resources. Total fossil fuel demand was calculated globally, based on world population and per capita demand, while production was calculated on a country-by-country basis and summed to obtain global production. Notably, production over the lifetime of a fuel source was not assumed to be symmetrical about a peak value like that depicted by a Hubbert curve. Separate production models were developed for mining (coal and unconventional oil) and field (gas and conventional oil) operations, which reflected the basic differences in extraction and processing techniques. Both of these models included a number of parameters that were fitted to historical production data, including: (1) coal for New South Wales, Australia; (2) gas from the North Sea, UK; and (3) oil from the North Sea, UK, and individual state data from the USA.
In this talk we will focus our attention on certain regularization techniques related to two operations involving monotone operators: point-wise sums of maximal monotone operators and pre-compositions of such operators with linear continuous mappings. These techniques, whose underlying idea is to obtain a bigger operator as a result, lead to two concepts of generalized operations: extended and variational sums of maximal monotone operators and, corresponding to them, extended and variational compositions of monotone mappings with linear continuous operators. We will review some of the basic results concerning these generalized concepts, and present some recent important advances.
Given a pair of Banach spaces X and Y such that one is the dual of the other, we study the relationships between generic Fréchet differentiability of convex continuous functions on Y (Asplund property), generic existence of linear perturbations for lower semicontinuous functions on X to have a strong minimum (Stegall variational principle), and dentability of bounded subsets of X (Radon-Nikodým property).
The PSLQ algorithm is an algorithm for finding integer relations in a set of real numbers. In particular, if (x1, ..., xn) is a vector of real numbers, then PSLQ finds integers (a1, ..., an), not all zero, such that a1*x1 + a2*x2 + ... + an*xn = 0, if such integers exist. In practice, PSLQ finds a sequence of matrices B_n such that if x is the original vector, then the reduced vector y = x * B_n tends to have smaller and smaller entries, until one entry is zero (or a very small number commensurate with precision), at which point an integer relation has been detected. PSLQ also produces a sequence of bounds on the size of any possible integer relation; these bounds grow until either precision is exhausted or a relation has been detected.
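A minimal usage sketch with mpmath's implementation of PSLQ (the example relation 8I = π log 2 for I = ∫₀¹ log(1+x)/(1+x²) dx is classical, chosen here purely for illustration):

    from mpmath import mp, pslq, quad, log, pi

    mp.dps = 50     # PSLQ needs precision well beyond the expected relation

    # Recover the integer relation 8*I - pi*log(2) = 0, given only the
    # numerical value of the integral I.
    I = quad(lambda x: log(1 + x) / (1 + x * x), [0, 1])
    rel = pslq([I, pi * log(2)])
    print(rel)      # -> [8, -1] (up to sign), i.e. 8*I = pi*log(2)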
The fundamental duality formula (see Zalinescu, "Convex Analysis in General Vector Spaces", Theorem 2.7.1) is extended to functions mapping into the power set of a topological linear space with a convex cone which is not necessarily pointed. Pairs of linear functionals are used as dual variables instead of linear operators. The talk will consist of three parts. First, motivations and explanations are given for the infimum approach to set-valued optimization. It deviates from other approaches, and it seems to be the only way to obtain a theory which completely resembles the scalar case. In the second part, the main results are presented, namely the fundamental duality formula and several conclusions. The third part deals with duality formulas for set-valued risk measures, a cutting-edge development in mathematical finance. It turns out that the proposed duality theory for set-valued functions provides a satisfying framework not only for set-valued risk measures, but also for no-arbitrage and superhedging theorems in conical market models.
An Asplund space is a Banach space which possesses desirable differentiability properties enjoyed by Euclidean spaces. Many characterisations of such spaces fall into two classes: (i) those where an equivalent norm possesses a particular general property, (ii) those where every equivalent norm possesses a particular property at some points of the space. For example: (i) X is an Asplund space if there exists an equivalent norm Fréchet differentiable on the unit sphere of the space, (ii) X is an Asplund space if every equivalent norm is Fréchet differentiable at some point of its unit sphere. In 1993 (F-P) showed that (i) X is an Asplund space if there exists an equivalent norm strongly subdifferentiable on the unit sphere of the space, and in 1995 (G-M-Z) showed that (ii) X separable is an Asplund space if every equivalent norm is strongly subdifferentiable at a nonzero point of X. Problem: Is this last result true for non-separable spaces? In 1994 (C-P) showed (i) X is an Asplund space if there exists an equivalent norm with subdifferential mapping Hausdorff weak upper semicontinuous on its unit sphere. We show: (ii) X is an Asplund space if every continuous gauge on X has a point where its subdifferential mapping is Hausdorff weak upper semicontinuous with weakly compact image, which goes some way towards solving the problem.
This is an expository talk about (conjectural) hypergeometric evaluations of lattice sums
$$F(a,b,c,d)=(a+b+c+d)^2\sum_{n_j=-\infty,\ j=1,2,3,4}^\infty \frac{(-1)^{n_1+n_2+n_3+n_4}}{(a(6n_1+1)^2+b(6n_2+1)^2+c(6n_3+1)^2+d(6n_4+1)^2)^2}$$
which arise as the values of L-functions of certain elliptic curves.
Riemannian manifolds constitute a broad and fruitful framework for the development of different fields in mathematics, such as convex analysis, dynamical systems, optimization and mathematical programming, among other scientific areas where some of their approaches and methods have successfully been extended from Euclidean spaces. Nonpositive sectional curvature is an important property enjoyed by a large class of differentiable manifolds, so Hadamard manifolds, which are complete simply connected Riemannian manifolds of nonpositive sectional curvature, have turned out to be a suitable setting for diverse disciplines.
On the other hand, the study of the class of nonexpansive mappings has become an active research area in nonlinear analysis. This is due to the connection with the geometry of Banach spaces along with the relevance of these mappings in the theory of monotone and accretive operators.
We study the problems that arise in the interface between the fixed point theory for nonexpansive type mappings and the theory of monotone operators in the setting of Hadamard manifolds. Different classes of monotone and accretive set-valued vector fields and the relationship between them will be presented, followed by the study of the existence and approximation of singularities for such vector fields. Then we analyze the problem of finding fixed points of nonexpansive type mappings and the connection with monotonicity. As a consequence, variational inequality and minimization problems in this setting will be discussed.
The term "closed form" is one of those mathematical notions that is commonplace, yet virtually devoid of rigor. And, there is disagreement even on the intuitive side; for example, most everyone would say that π + log 2 is a closed form, but some of us would think that the Euler constant γ is not closed. Like others before us, we shall try to supply some missing rigor to the notion of closed forms and also to give examples from modern research where the question of closure looms both important and elusive.
This talk accompanies a paper by Jonathan M. Borwein and Richard E. Crandall, to appear in the Notices of the AMS, which is available at http://www.carma.newcastle.edu.au/~jb616/closed-form.pdf
The design of signals with specified frequencies has applications in numerous fields including acoustics, antenna beamforming, digital filters, optics, radar, and time series analysis. It is often desirable to concentrate signal intensity in certain locations and design methods for this have been intensively studied and are well understood. However, these methods assume that the specified frequencies consist of an interval of integers. What happens when this assumption fails is almost a complete mystery that this talk will attempt to address.
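To make the problem concrete in the discrete setting (a sketch under one natural formulation of our own, not necessarily the talk's): among length-N signals whose DFT is supported on a prescribed, possibly non-contiguous set S of frequencies, the signal most concentrated on a time window T is the top eigenvector of a small Hermitian matrix:

    import numpy as np

    # Signals x = F_S c, where F_S holds the DFT columns indexed by S
    # (orthonormal, so ||x|| = ||c||). The energy of x on the window T is
    # c* A c with A = F_S[T]* F_S[T]; maximise via the top eigenvector.
    N = 128
    S = np.array([3, 4, 5, 21, 22, 23, 60])        # non-contiguous frequencies
    T = np.arange(40, 60)                          # target time window

    n = np.arange(N)
    F_S = np.exp(2j * np.pi * np.outer(n, S) / N) / np.sqrt(N)
    A = F_S[T].conj().T @ F_S[T]                   # restrict rows to T
    vals, vecs = np.linalg.eigh(A)
    x = F_S @ vecs[:, -1]                          # most concentrated signal
    print("max energy fraction in window:", float(vals[-1]))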
In the area of Metric Fixed Point Theory, one of the outstanding questions was whether the fixed point property implies reflexivity. This question was answered in the negative in 2008 by P.K. Lin, when he showed that a certain renorming of the space of absolutely summable sequences has the fixed point property.
In this talk we will show a general way to renorm certain spaces so that they have the fixed point property. We also give general properties for a given Banach space to enjoy the f.p.p., and we will show equivalences between geometrical properties and certain fixed point properties.
There are many algorithms with which linear programs (LPs) can be solved (Fourier-Motzkin, simplex, barrier, ellipsoid, subgradient, bundle, ...). I will provide a very brief review of these methods and their advantages and disadvantages. An LP solver is the main ingredient of every solution method (branch & bound, cutting planes, column generation, ...) for (NP-hard) mixed-integer linear programs (MIPs). What combinations of which techniques work well in practice? There is no general answer. I will show, by means of many practical examples from my research group (telecommunication, transport, traffic and logistics, energy, ...), how large-scale LPs and MIPs are successfully attacked today.
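As a toy illustration of "an LP solver is the main ingredient": a minimal branch & bound for a tiny binary program, with SciPy's linprog solving the relaxation at every node (a hedged sketch, far removed from the large-scale technology discussed in the talk; the instance data are invented):

    import numpy as np
    from scipy.optimize import linprog

    # Toy branch & bound for min c.x, A_ub x <= b_ub, x in {0,1}^n.
    def branch_and_bound(c, A_ub, b_ub, bounds, best=(np.inf, None)):
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if not res.success or res.fun >= best[0]:
            return best                              # infeasible or pruned
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:                                 # integral: new incumbent
            return (res.fun, res.x)
        i = frac[0]                                  # branch on x_i = 0 / 1
        for v in (0, 1):
            bi = list(bounds); bi[i] = (v, v)
            best = branch_and_bound(c, A_ub, b_ub, bi, best)
        return best

    c = np.array([-5.0, -4.0, -3.0])                 # maximise 5x1+4x2+3x3
    A_ub = np.array([[2.0, 3.0, 1.0], [4.0, 1.0, 2.0]])
    b_ub = np.array([5.0, 6.0])
    print(branch_and_bound(c, A_ub, b_ub, [(0, 1)] * 3))   # -> (-9.0, [1,1,0])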
The log-concavity of a sequence is a much studied concept in combinatorics with surprising links to many other mathematical fields. In this talk we discuss the stronger but much less studied notion of m-fold log-concavity, which has recently received some attention after Boros and Moll conjectured that a "remarkable" sequence encountered in the integration of an inverse cubic is infinitely log-concave. In particular, we report on a recent result of Brändén which implies infinite log-concavity of the binomial coefficients, and on other new developments. Examples and conjectures are promised. A PDF of the talk is available here.
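For concreteness, a small sketch of the m-fold log-concavity test (the operator L below is the standard one; applying it repeatedly to a row of Pascal's triangle illustrates Brändén's result that binomial rows are infinitely log-concave):

    from math import comb

    # L maps a sequence (a_k) to (a_k^2 - a_{k-1} a_{k+1}), with zeros padded
    # at the ends; m-fold log-concavity means L can be applied m times while
    # staying nonnegative, and "infinitely log-concave" means it never fails.
    def L(a):
        padded = [0] + list(a) + [0]
        return [padded[k] ** 2 - padded[k - 1] * padded[k + 1]
                for k in range(1, len(padded) - 1)]

    row = [comb(20, k) for k in range(21)]     # a row of Pascal's triangle
    for m in range(1, 6):
        row = L(row)
        print(f"L^{m} nonnegative:", all(v >= 0 for v in row))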
Accurate computer recognition of handwritten mathematics offers to provide a natural interface for mathematical computing, document creation and collaboration. Mathematical handwriting, however, provides a number of challenges beyond what is required for the recognition of handwritten natural languages. For example, it is usual to use symbols from a range of different alphabets and there are many similar-looking symbols. Many writers are unfamiliar with the symbols they must use and therefore write them incorrectly. Mathematical notation is two-dimensional and size and placement information is important. Additionally, there is no fixed vocabulary of mathematical "words" that can be used to disambiguate symbol sequences. On the other hand there are some simplifications. For example, symbols do tend to be well-segmented. With these characteristics, new methods of character recognition are important for accurate handwritten mathematics input.
An informal one-day workshop on Multi Zeta Values will be held on Wed 20th Oct, from 12.30 pm to 6:00 pm. There will be talks by Laureate Professor Jonathan Borwein (Newcastle), Professor Yasuo Ohno (Kinki University, Osaka), and A/Professor Wadim Zudilin (Newcastle), as well as by PhD students from the two universities, followed by a dinner. If you are interested in attending, please inform Juliane Turner Juliane.Turner@newcastle.edu.au so that we can plan for the event.
The matching polynomial is a topic spanning mathematics, statistical physics (the dimer-monomer problem) and chemistry (topological resonance energy). In this talk we will discuss the computation of the matching polynomial and the location of its roots. We show that the roots of matching generating polynomials of graphs are dense in (−∞, 0], and that the roots of matching polynomials of graphs are dense in (−∞, +∞), which answers a problem of Brown et al. (see Journal of Algebraic Combinatorics, 19, 273-282, 2004). Some similar results for the characteristic polynomial, independence polynomial and chromatic polynomial are also presented.
[Also speaking: Prof Weigen Yan]
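As a small concrete sketch (for paths only; the talk treats general graphs): the matching polynomial of the path P_n satisfies a standard three-term recurrence, and its roots are real, consistent with the Heilmann-Lieb theorem:

    import numpy as np

    # Matching polynomials mu_n of the path P_n satisfy
    # mu_n = x*mu_{n-1} - mu_{n-2}: delete an end vertex; its edge is either
    # unmatched (first term) or matched, removing both endpoints (second).
    def matching_poly_path(n):
        prev, cur = np.poly1d([1.0]), np.poly1d([1.0, 0.0])  # mu_0=1, mu_1=x
        for _ in range(n - 1):
            prev, cur = cur, np.poly1d([1.0, 0.0]) * cur - prev
        return cur if n >= 1 else prev

    mu = matching_poly_path(8)
    print("coefficients:", mu.coeffs)
    print("roots all real:", np.allclose(mu.roots.imag, 0))  # Heilmann-Lieb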
We discuss the feasibility pump heuristic and interpret it as a multi-start, global optimization algorithm that utilizes a fast local minimizer. The function that is minimized has many local minima, some of which correspond to feasible integral solutions. This interpretation suggests alternative ways of incorporating restarts, one of which is the use of cutting planes to eliminate local optima that do not correspond to feasible integral solutions. Numerical experiments show encouraging results on standard test libraries.
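A minimal sketch of the basic feasibility pump loop for a pure binary program (using SciPy's linprog; the crude random restart below is only a stand-in for the restart strategies discussed in the talk, and the instance is invented):

    import numpy as np
    from scipy.optimize import linprog

    # Alternate between rounding the LP point and projecting back onto the
    # LP relaxation in the L1 distance, which is linear for binary variables.
    def feasibility_pump(A_ub, b_ub, n, max_iter=50):
        rng = np.random.default_rng(1)
        x = linprog(np.zeros(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n).x
        for _ in range(max_iter):
            x_int = np.round(x)
            if all(A_ub @ x_int <= b_ub + 1e-9):
                return x_int                       # feasible integral point
            # dist(x, x_int) = sum_{x_int=0} x_j + sum_{x_int=1} (1 - x_j)
            c = np.where(x_int == 0, 1.0, -1.0)
            x = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n).x
            if np.array_equal(np.round(x), x_int): # cycling: crude restart
                x = rng.random(n)
        return None

    A = np.array([[2.0, 2.0, 2.0], [-1.0, -1.0, -1.0]])
    b = np.array([3.0, -1.0])                      # 1 <= x1+x2+x3 <= 1.5
    print(feasibility_pump(A, b, 3))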
We are looking at families of finite sets, more specifically subsets of [n]={1,2,...,n}. In particular, we are interested in antichains, meaning no member of the family is contained in another one. In this talk we focus on antichains containing only sets of two different cardinalities, say k and l, and study the question of what the smallest size of a maximal antichain is (maximal in the sense that it is impossible to add any k-set or l-set without violating the antichain property). This can be nicely reformulated as a problem in extremal (hyper)graph theory, looking similar to the Turán problem on the maximum number of edges in a graph without a complete subgraph on l vertices. We sketch the solution for the case (k,l)=(2,3), conjecture an optimal construction for the case (k,l)=(2,4) and present some asymptotic bounds for this case.
The continued fraction:
$${\cal R}_\eta(a,b) =\,\frac{{\bf \it a}}{\displaystyle \eta+\frac{\bf \it b^2}{\displaystyle \eta +\frac{4{\bf \it a}^2}{\displaystyle \eta+\frac{9 {\bf \it b}^2}{\displaystyle \eta+{}_{\ddots}}}}}$$
enjoys attractive algebraic properties such as a striking arithmetic-geometric mean relation and elegant links with elliptic-function theory. The fraction presents a computational challenge, which we could not resist.
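A hedged sketch of the naive truncated evaluation (the numerators follow the pattern b², 4a², 9b², 16a², ...; the special value R_1(1,1) = log 2 is one of the known evaluations, and the painfully slow convergence visible below is part of the computational challenge):

    from mpmath import mp, mpf, log

    mp.dps = 30

    # Backward recurrence for a depth-N truncation of R_eta(a,b).
    def R_truncated(a, b, eta, depth):
        t = mpf(0)
        for k in range(depth, 0, -1):
            num = (k * b) ** 2 if k % 2 else (k * a) ** 2
            t = num / (eta + t)
        return a / (eta + t)

    for d in (10, 100, 1000, 10000):
        print(d, R_truncated(mpf(1), mpf(1), mpf(1), d))
    print("log 2 =", log(2))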
The continued fraction:
$${\cal R}_\eta(a,b) =\,\frac{{\bf \it a}}{\displaystyle \eta+\frac{\bf \it b^2}{\displaystyle \eta +\frac{4{\bf \it a}^2}{\displaystyle \eta+\frac{9 {\bf \it b}^2}{\displaystyle \eta+{}_{\ddots}}}}}$$
enjoys attractive algebraic properties such as a striking arithmetic-geometric mean relation and elegant links with elliptic-function theory. The fraction presents a computational challenge, which we could not resist.
In Part II I will reprise what I need from Part I and focus on the dynamics. The talks are stored here.
We will give a brief overview of the history of Ramanujan and give samplings of areas such as partitions, partition congruences, ranks, modular forms, and mock theta functions. For example: a partition of a positive number $n$ is a non-increasing sequence of positive integers whose sum is $n$. There are five partitions of the number four: 4, 3+1, 2+2, 2+1+1, 1+1+1+1. If we let $p(n)$ be the number of partitions of $n$, it turns out that $p(5n+4)\equiv 0 \pmod{5}$. How does one explain this? Once the basics and context have been introduced, we will discuss new results with respect to mock theta functions and show how they relate to old and recent results.
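A quick computational check of the congruence (a standard dynamic-programming partition count, included here for illustration rather than taken from the talk):

    # Count partitions p(n) by allowing parts 1, 2, ..., N in turn,
    # then check Ramanujan's congruence p(5n+4) = 0 (mod 5).
    def partitions_up_to(N):
        p = [1] + [0] * N                 # p[0] = 1 (the empty partition)
        for part in range(1, N + 1):
            for n in range(part, N + 1):
                p[n] += p[n - part]
        return p

    p = partitions_up_to(100)
    print(p[4], p[9], p[14])              # 5, 30, 135
    print(all(p[5 * n + 4] % 5 == 0 for n in range(20)))   # True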
What is a "random subgroup" of a group, and who cares? In a (non-abelian) group based cryptosystem, two parties (Alice and Bob) each choose a subgroup of some platform group "at random" -- each picks $k$ elements "at random" and takes the subgroup generated by their chosen elements.
But for some platform groups (like the braid groups, which were chosen first, being so complicated and difficult) a "random subgroup" is not so random after all. It turns out that if you pick $k$ elements of a braid group, they will (almost always) generate a free group with your $k$ elements as the free basis. And if Alice and Bob are just playing with free groups, it makes their secrets easy to attack.
Richard Thompson's group $F$ is an infinite, torsion free group, with many weird and cool properties, but the one I liked for this project is that it has no free subgroups (of rank $>1$) at all, so a random subgroup of $F$ could not be free -- so what would it be?
This is joint work with Sean Cleary (CUNY), Andrew Rechnitzer (UBC) and Jennifer Taback (Bowdoin).
In this talk we consider two-phase flow models in porous media as they occur in several applications like oil production, pollutant transport or CO2 storage. After a general introduction, we focus on an enhanced model where the capillary pressure is rate-dependent. We discuss the consequences of this term for heterogeneous materials with and without entry pressure. In the case of entry pressures the problem can be reformulated as an inequality constraint at the material interface. Suitable discretization schemes and solution algorithms are proposed and used in various numerical simulations.
The complete elliptic integrals of the first and second kinds (K(x) and E(x)) will be introduced and their key properties revised. Then, new and perhaps interesting results concerning moments and other integrals of K(x) and E(x) will be derived using elementary means. Diverse connections will be made, for instance with random walks and some experimental number theory.
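One such moment can be checked numerically with mpmath (the evaluation ∫₀¹ K(k) dk = 2G, with G Catalan's constant, is classical; note that mpmath's ellipk takes the parameter m = k², so K(k) = ellipk(k**2)):

    from mpmath import mp, quad, ellipk, catalan

    mp.dps = 30

    # First moment of K: int_0^1 K(k) dk = 2G; tanh-sinh quadrature copes
    # with the logarithmic singularity of K at k = 1.
    val = quad(lambda k: ellipk(k ** 2), [0, 1])
    print(val)
    print(2 * catalan)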
See flyer [PDF]
Automorphism groups of locally finite trees form a significant class of examples of locally compact totally disconnected topological groups. In this talk I will discuss my honours research, which covered the various local properties of automorphism groups. I will provide methods of constructing such groups, in particular groups acting on regular trees, and discuss what conclusions we can make regarding the structure of these groups.
This talk will be an introduction to cycle decompositions of complete graphs in the context of Alspach's conjecture about the necessary and sufficient conditions for their existence. Several useful methods of construction based on algebra, graph products and modifying existing decompositions will be presented. The most up to date results on this problem will be mentioned and future directions of study may be suggested.
This talk will describe a number of different variational principles for self-adjoint eigenvalue problems that arose from considerations of convex and nonlinear analysis.

First, some unconstrained variational principles that are smooth analogues of the classical Rayleigh principles for eigenvalues of symmetric matrices will be described. In particular, the critical points are eigenvectors and their norms are related to the eigenvalues of the matrix. Moreover, the functions have a nice Morse theory, with the Morse indices describing the ordering of the eigenvectors. (A finite-dimensional sketch appears below.)

Next, an unconstrained variational principle for eigenfunctions of elliptic operators will be illustrated for the classical Dirichlet Laplacian eigenproblem. The critical points of this problem have a Morse theory that plays a similar role to the classical Courant-Fischer-Weyl minimax theory.

Finally, I will describe certain Steklov eigenproblems and indicate how they are used to develop a spectral characterization of trace spaces of Sobolev functions.
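To make the first principle concrete in finite dimensions, here is a sketch of one such unconstrained functional (our own illustrative choice, not necessarily the speaker's): for symmetric positive definite A, the critical points of f(x) = ||x||⁴/4 − xᵀAx/2 satisfy Ax = ||x||²x, so they are eigenvectors whose squared norm is the eigenvalue, and the global minimiser carries the largest eigenvalue:

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = M @ M.T + np.eye(5)                # symmetric positive definite

    # Plain gradient descent on f; grad f(x) = (x.x) x - A x.
    x = rng.standard_normal(5)
    for _ in range(5000):
        x -= 0.01 * ((x @ x) * x - A @ x)

    print("lambda from ||x||^2:", x @ x)
    print("largest eigenvalue: ", np.linalg.eigh(A)[0][-1])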
Rational numbers can be represented in many different ways: as a fraction, as a Möbius function, as a 2x2 matrix, as a string of L's and R's, as a continued fraction, as a disc in the plane, or as a point in the lattice Z^2. Converting between the representations involves interesting questions about computation and geometry. The geometries that arise are hyperbolic, inversive, or projective.
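A small sketch of two of these conversions (the continued fraction via the Euclidean algorithm, and the standard Stern-Brocot L/R coding derived from it; the helper names are ours):

    # Continued fraction of p/q via the Euclidean algorithm.
    def continued_fraction(p, q):
        cf = []
        while q:
            cf.append(p // q)
            p, q = q, p % q
        return cf

    # Stern-Brocot path: partial quotients become runs of R's and L's,
    # with the last quotient reduced by 1 since the path stops at p/q.
    def stern_brocot(p, q):
        cf = continued_fraction(p, q)
        cf[-1] -= 1
        return "".join("RL"[i % 2] * a for i, a in enumerate(cf))

    print(continued_fraction(2, 5), stern_brocot(2, 5))   # [0, 2, 2]  LLR
    print(continued_fraction(3, 2), stern_brocot(3, 2))   # [1, 2]     RL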