Mathematical Statistics
Centre for Mathematical Sciences
Lund University with Lund Institute of Technology
We offer both basic courses (one at the Faculty of Science and several, for different programmes, at the Lund Institute of Technology) and advanced courses.
Below you will find a list of all courses offered by the division except the basic ones. The basic courses contain elementary probability theory and basic statistics (point estimators, confidence intervals, tests, simple and multiple linear regression) in slightly different proportions. For more information (in Swedish) about the basic courses, click here for courses given at the Faculty of Science and here for courses given at the Lund Institute of Technology.
In the list below, courses given at the Faculty of Science have codes that start with MAS, while courses given at the Lund Institute of Technology have codes that start with FMS. It should be noted that most advanced courses are offered at both faculties; they then have dual course codes. In two cases (FMS045/MASC04 and FMS180/MAS2C03) students taking the course at the Faculty of Science are required to pass an oral exam in addition to the written one, while students at the Lund Institute of Technology are not; the oral exam also gives extra course credit.
For information about degree projects click here and for information about (post)graduate courses click here.
You can get more information about the courses from their home pages (see below), from the lecturer for a particular course or from the two directors of studies: Anna Lindgren (Lund Institute of Technology) and Magnus Wiktorsson (Faculty of Science). The directors of studies can also answer more general questions.
Now, here is the list of courses:
This course gives further knowledge in probability theory. It deals with random variables in one and several dimensions, conditional distributions, moment generating functions and characteristic functions, multivariate normal distributions, quadratic forms, order statistics, convergence criteria for random variables, the Borel-Cantelli lemmas, convergence via transforms, the central limit theorem and the strong law of large numbers. Poisson processes: conditioning on the number of occurrences and on occurrence times, thinned and compound Poisson processes.
Lecturer: Tatyana Turova
Literature: A. Gut: An Intermediate Course in Probability Theory. Springer 1995.
Further information: See the course home page
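As a purely illustrative sketch outside the course material (assuming numpy is available, with made-up intensity and thinning probability), the following Python snippet simulates a Poisson process on [0, T] and thins it; the retained points again form a Poisson process with reduced intensity.

    import numpy as np

    rng = np.random.default_rng(1)
    T, lam, p = 100.0, 2.0, 0.3   # time horizon, intensity, retention probability

    # A Poisson process on [0, T]: the number of points is Poisson(lam*T), and
    # given that number the points are independent and uniform on [0, T].
    n = rng.poisson(lam * T)
    times = np.sort(rng.uniform(0.0, T, n))

    # Thinning: keep each point independently with probability p; the kept
    # points again form a Poisson process, now with intensity p*lam.
    kept = times[rng.random(n) < p]

    print("points before thinning:", n, "expected:", lam * T)
    print("points after thinning:", kept.size, "expected:", p * lam * T)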
The course gives more advanced knowledge of inference theory. It contains the theory of exact methods: the factorization theorem, exponential families, the Rao-Blackwell theorem, ancillary statistics, the Cramér-Rao inequality, the Neyman-Pearson lemma, permutation tests and interrelations between hypothesis testing and confidence intervals. Further considered are asymptotic methods: maximum likelihood estimators, standard errors, marginal, conditional and penalized likelihood, and hypothesis testing according to the likelihood ratio, Wald or score methods. The next topic is Bayesian inference: estimators, hypothesis tests, confidence intervals and differences from the frequentist interpretation. Finally, some orientation is given about sequential tests and inference for finite populations.
Lecturer: Bengt Ringnér.
Literature: Young and Smith: Essentials of Statistical Inference, Cambridge University Press, 2005.
Further information: See the course home page
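Purely as an illustration (not course material, with simulated data and assumed parameter values): a likelihood ratio test for the rate of an exponential sample, where twice the log likelihood ratio is compared with its asymptotic chi-squared distribution with one degree of freedom.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.exponential(scale=1.0 / 1.5, size=200)   # simulated data, true rate 1.5

    lam0 = 1.0                    # null hypothesis H0: rate = 1
    lam_hat = 1.0 / x.mean()      # maximum likelihood estimator of the rate

    # Log-likelihood of an exponential sample with rate lam.
    def loglik(lam):
        return x.size * np.log(lam) - lam * x.sum()

    # Likelihood ratio statistic; approximately chi-squared(1) under H0.
    lr = 2.0 * (loglik(lam_hat) - loglik(lam0))
    p_value = stats.chi2.sf(lr, df=1)
    print(f"LR statistic = {lr:.2f}, p-value = {p_value:.4f}")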
The course covers different aspects of Markov chains with countable state space and either discrete or continuous time: stationary distributions, limit theorems, hitting times, Poisson processes, birth-and-death processes and applications.
Lecturer: Tatyana Turova
Literature: T. Rydén and G. Lindgren: Markovprocesser (in Swedish), Division of
Mathematical Statistics, Lund, 2002.
Further information: See the course home page.
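As a small illustration outside the course material (the transition matrix below is invented for the example), the stationary distribution of a discrete-time Markov chain can be computed numerically as a normalized left eigenvector of the transition matrix.

    import numpy as np

    # Transition matrix of a small discrete-time Markov chain (rows sum to one).
    P = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.7, 0.1],
                  [0.0, 0.3, 0.7]])

    # The stationary distribution pi solves pi P = pi with the entries of pi
    # summing to one, i.e. it is a left eigenvector of P for the eigenvalue 1.
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    pi = pi / pi.sum()

    print("stationary distribution:", pi)
    print("check pi P = pi:", pi @ P)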
Modelling of processes some (or all) of whose characteristics are time invariant. Important concepts are covariance, correlation and cross-correlation functions. Gaussian processes and white noise. Linear filters, autoregressive and moving average processes. Spectral density function, phase and amplitude spectra. Estimation of the mean, the covariance function and the spectral density. Ergodicity. Frequency analysis. Signal-to-noise ratio. Matched filters and the Wiener filter. Some properties of Brownian motion.
Lecturer: Georg Lindgren.
Literature: G. Lindgren and H. Rootzén: Stationära Stokastiska
Processer (English translation exists), Division of Mathematical Statistics, Lund, 2004.
Further information: See the course home page.
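As an illustration only (not part of the course, with assumed parameter values), the following Python sketch simulates a stationary AR(1) process and compares the estimated covariance function with its theoretical value.

    import numpy as np

    rng = np.random.default_rng(2)

    # Simulate an AR(1) process X_t = a X_{t-1} + e_t driven by Gaussian white noise.
    a, sigma, n = 0.8, 1.0, 5000
    e = rng.normal(0.0, sigma, n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + e[t]

    # Estimate the covariance function r(k) and compare with the theoretical
    # value sigma^2 * a^k / (1 - a^2) for a stationary AR(1) process.
    def cov_est(x, k):
        xc = x - x.mean()
        return np.mean(xc[: len(x) - k] * xc[k:])

    for k in range(4):
        theo = sigma**2 * a**k / (1.0 - a**2)
        print(f"lag {k}: estimate {cov_est(x, k):.3f}, theoretical {theo:.3f}")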
The course gives theory and methodology of how to model, design and evaluate experiments. Important concepts are: Simple comparative experiments. Analysis of variance; transformations, model validation and residual analysis. Factorial design with fixed, random and mixed effects. Additivity and interaction. Complete and incomplete designs. Randomized block designs. Latin squares and confounding. Regression analysis and analysis of covariance. Response surface methodology. Off-line quality control and Taguchi methods.
Lecturer: TBA.
Literature: D.C. Montgomery: Design and Analysis of Experiments, Wiley, New York, 5th ed, 2000.
Further information: See the course home page
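A minimal sketch, not part of the course material and using made-up treatment groups, of a one-way fixed-effects analysis of variance (assuming numpy and scipy are available):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Three treatment groups with (assumed) different means.
    groups = [rng.normal(mu, 1.0, 20) for mu in (5.0, 5.5, 6.5)]

    # One-way ANOVA: F statistic and p-value for equality of the group means.
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")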
Statistical tools for risk assessment. Bayesian theory. The Weibull distribution and other extreme value distributions. Event intensities and the Poisson process. Statistical correlation and Monte Carlo simulation. Use of means, standard deviations and quantiles for risk assessment. Risk computations, safety index, extrapolation of small risks.
Lecturer: TBA
Literature: Rychlik, I. and Ryden, J.:
Introduction to Probability and Risk Analysis. Lund 2002
Further information: See the course home page
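As an illustration only (the load and strength distributions below are assumptions made for the example, not course material), a small failure probability can be estimated by Monte Carlo simulation:

    import numpy as np

    rng = np.random.default_rng(4)

    # Monte Carlo estimate of P(load > strength) with an assumed
    # Weibull-distributed load and a lognormal strength.
    n = 1_000_000
    load = rng.weibull(a=2.0, size=n) * 10.0        # Weibull shape 2, scale 10
    strength = rng.lognormal(mean=3.3, sigma=0.2, size=n)

    p_fail = np.mean(load > strength)
    print(f"estimated failure probability: {p_fail:.2e}")
    print(f"99% quantile of the load: {np.quantile(load, 0.99):.1f}")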
The course extends and deepens basic knowledge of probability theory. Central topics are existence and uniqueness of measures defined on sigma-fields, integration theory, Radon-Nikodym derivatives and conditional expectation, and weak convergence of probability measures on metric spaces.
Lecturer: Tatyana Turova.
Literature: A.N. Shiryaev: Probability, Springer 1996.
Further information: See the course home page.
Extreme value theory concerns mathematical modelling of extreme events. Recent developments have introduced very flexible and theoretically well motivated semi-parametric models for extreme values, which are now at the stage where they can be used to address important technological problems of handling risks in areas such as wind engineering, hydrology, flood monitoring and prediction, climatic changes, structural reliability, corrosion modelling, and large insurance claims or large fluctuations in financial data (volatility). In many applications of extreme value theory, predictive inference for unobserved events is the main interest. One wishes to make inference about events over a time period much longer than that for which data are available. For example, insurance companies are interested in the maximum amount of claims due to storm damage during, say, the next 30 years, based on data from the past 10-15 years. In bridge design a major factor is the maximum wind speed that can occur in any direction during the life of the bridge, yet the dataset used to estimate a return value for high wind speeds is often recorded over a much shorter time period than the expected lifetime of the bridge. Statistical modelling of extreme events has been the subject of much practical and theoretical work in the last few years. The course gives an overview of a number of topics in modern extreme value theory.
Lecturer: Nader Tajvidi.
Literature: Coles, S. (2001): An Introduction to Statistical Modeling of Extreme Values, Springer-Verlag, London.
Further information: See the course home page.
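As an illustrative sketch only (simulated "annual maxima" and assumed parameter values, not course material), a generalized extreme value distribution can be fitted to block maxima and used to estimate a return level; note that scipy's genextreme parameterizes the shape with the opposite sign of the usual xi.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Assumed data: 40 "annual maxima", here simply simulated from a Gumbel distribution.
    annual_max = rng.gumbel(loc=20.0, scale=4.0, size=40)

    # Fit a generalized extreme value distribution to the block maxima.
    shape, loc, scale = stats.genextreme.fit(annual_max)

    # The T-year return level is the (1 - 1/T) quantile of the fitted distribution.
    T = 100
    return_level = stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"estimated {T}-year return level: {return_level:.1f}")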
The course treats modelling of stochastic systems using knowledge and data. Important concepts: stationary and nonstationary processes, ARIMA processes, seasonal variation. Prediction, filtering and reconstruction in transfer function models and state space models. Parameter and structure estimation by least squares, maximum likelihood and prediction error methods. Spectral analysis, recursive estimation, adaptive techniques, robustness and outlier detection. Multivariate time series. Spectral density estimation.
Lecturer: Finn Lindgren
Literature: H. Madsen (2007): Time Series Analysis, Chapman & Hall/CRC.
Further information: See the course home page
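As a purely illustrative sketch outside the course material (simulated data, assumed coefficients), autoregressive parameters can be estimated by least squares, regressing the series on its own lags:

    import numpy as np

    rng = np.random.default_rng(6)

    # Simulate an AR(2) process X_t = a1 X_{t-1} + a2 X_{t-2} + e_t.
    a1, a2, n = 0.6, -0.3, 3000
    e = rng.normal(0.0, 1.0, n)
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]

    # Least-squares estimation: regress X_t on its two most recent lags.
    X = np.column_stack([x[1:-1], x[:-2]])
    y = x[2:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("true coefficients:", a1, a2, "estimates:", coef)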
The course is given jointly by the Division of Mathematical Statistics at Lund University and the Department of Informatics and Mathematical Modelling at the Technical University of Denmark (DTU), Kongens Lyngby, Denmark. The course treats advanced time series analysis, with the primary goal of giving a thorough knowledge of modelling of dynamic systems. Special attention is paid to non-linear and non-stationary systems, and to the use of stochastic differential equations for modelling physical systems. The contents encompass, e.g., non-linear time series models, kernel estimators and time series analysis, identification and estimation in non-linear models, state space models, state filtering, particle filters, prediction in non-linear models, estimation of linear and (some) non-linear stochastic differential equations, parameter tracking in time series, and experiment design for dynamic system identification.
Lecturers: Erik Lindström, LTH, and Henrik Madsen, DTU.
Literature: H. Madsen and J. Holst (2002): Non-linear and non-stationary
time series analysis, Informatics and Mathematical Modelling, Technical
University of Denmark, Kongens Lyngby, Denmark.
Further information: See the course home page.
The course offers an overview of simulation and computational techniques for performing statistical inference. We study bootstrap and permutation based methods for testing statistical hypotheses and constructing confidence intervals. Markov chain Monte Carlo (MCMC) methods such as the Gibbs sampler and the Metropolis-Hastings algorithm will be introduced and applied to solving problems in Bayesian statistics. A brief introduction to Bayesian statistics and hierarchical models will be given in the course. Finally, we study the EM algorithm for computing maximum likelihood estimators in problems with missing/latent data.
Lecturer: TBA.
Literature: Lecture Notes.
Further information: See the course home page
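As an illustration only (the target density and tuning values below are made up for the example, not course material), a random walk Metropolis-Hastings sampler can be written in a few lines:

    import numpy as np

    rng = np.random.default_rng(7)

    # Unnormalized log density of an assumed target: a mixture of a standard
    # normal and a wider normal component.
    def log_target(x):
        return np.logaddexp(-0.5 * x**2, -0.5 * (x / 3.0) ** 2 - np.log(3.0))

    n_iter, step = 20000, 2.0
    samples = np.empty(n_iter)
    x = 0.0
    for i in range(n_iter):
        prop = x + step * rng.normal()
        # Accept the proposal with probability min(1, target(prop)/target(x)).
        if np.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        samples[i] = x

    # Discard a burn-in period before summarizing the chain.
    print("mean estimate:", samples[5000:].mean())
    print("std estimate:", samples[5000:].std())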
The course deals with Bayesian methods for modelling, classification and reconstruction. Markov random fields. Gibbs distributions, deformable templates such as snakes and balloons. Correlation structures, multivariate techniques, discriminant analysis. Simulation methods (MCMC). Three levels of image analysis: high level classification of objects, general shape reconstruction, and pixel analysis such as noise reduction and segmentation through pixel classification.
Lecturer: Finn Lindgren.
Literature: Lindgren F: Image modelling and estimation -- A statistical approach, Lund 2002.
Further information: See the course home page.
The course treats modelling and estimation in nonlinear dynamical stochastic models for financial systems. The models may be expressed in continuous or discrete time, and the modelling efforts include structure determination as well as estimation of parameters. GARCH models in discrete time and models based on stochastic differential equations in continuous time are two common model classes. The course also discusses prediction, optimization, and risk evaluation for such systems. Participants will encounter maximum likelihood and moment methods for parameter estimation, kernel based estimation methods and nonlinear filters for filtering and prediction, as well as an introduction to bootstrap methods.
Lecturer: TBA.
Literature: Madsen, H., Nielsen, J.N. and Baadsgaard, M.: Statistics in Finance, IMM, DTU, Lyngby.
Further information: See the course home page.
Time period: Spring semester, both periods.
Examination: Written examination.
Requirements: A basic course in mathematical statistics and a course
in stochastic processes.
Language: If requested the course will be given in English.
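As an illustrative sketch only (parameter values assumed for the example, not course material), returns from a GARCH(1,1) model in discrete time can be simulated as follows:

    import numpy as np

    rng = np.random.default_rng(8)

    # GARCH(1,1): r_t = sigma_t * z_t, sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2.
    omega, alpha, beta, n = 0.05, 0.1, 0.85, 2000
    r = np.zeros(n)
    sigma2 = np.full(n, omega / (1.0 - alpha - beta))   # start at the unconditional variance

    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * rng.normal()

    print("sample variance:", r.var(), "unconditional variance:", omega / (1 - alpha - beta))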
This course consists of three integrated parts. In the first part we consider option pricing in discrete time. The purpose is to define some important key concepts such as arbitrage opportunities, completeness, martingales, and martingale measures. We will use trees to model the evolution of stock prices and information. In the second part, alternative models will be analysed. We will focus on a class of models in continuous time known as stochastic differential equations. In order to understand these models, however, some deeper knowledge of stochastic analysis is required. Among other things, we will study Brownian motion, stochastic integrals, and the Itô formula. Finally, in the third part we turn to economically relevant applications. We start by revisiting the theory of option pricing and show how to derive the famous Black-Scholes formula. Thereafter, we turn to the bond market and derive consistency relations for the prices of zero coupon bonds. To conclude, we will draw attention to a variety of interesting applications within closely related fields such as portfolio theory, investment under uncertainty, pricing under default risk, and numerical methods.
Lecturer: TBA.
Literature: T. Björk: Arbitrage Theory in Continuous Time, Oxford University Press, Oxford; lecture notes.
Further information: See the course home page.
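As an illustration only (example parameter values assumed, not course material), the Black-Scholes price of a European call option can be computed directly from the closed-form formula:

    import numpy as np
    from scipy.stats import norm

    def black_scholes_call(S, K, r, sigma, T):
        """Black-Scholes price of a European call option.

        S: spot price, K: strike, r: risk-free rate,
        sigma: volatility, T: time to maturity in years.
        """
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    # Example values, chosen only for illustration.
    print(black_scholes_call(S=100.0, K=105.0, r=0.03, sigma=0.2, T=1.0))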
Last modified: Thu Nov 27 09:56:01 CET 2008