MASTER'S THESES AT THE DIVISION OF MATHEMATICS, LTH, 2007


Classification methods for coarse classification of On-line Chinese Character Recognition
Student: Martin Appelgren, Pi-02
Advisor: Kalle Åström, Jakob Sternby
In cooperation with: Zi Decuma
Date Finished: 2007-12-21
Abstract: Today's increased number of mobile devices, such as mobile phones and PDAs, makes it possible for users to easily and quickly enter words and phrases into the devices by simply writing with a pen on the screen. This is a very time-saving way of entering words compared to pressing buttons on the device, in particular for Chinese users. Unlike many other languages, the Chinese language contains over 6000 characters, which raises the demands on the recognition method due to users' requirements on recognition speed and accuracy. This thesis considers different classification methods used in the so-called coarse classification step of On-line Chinese Character Recognition (OCCR), in order to improve the speed of recognition while maintaining recognition accuracy. As a first step, all the Chinese characters are clustered, which is an unsupervised classification problem, using a binary hierarchical clustering approach with two well-known clustering algorithms, the Min-Max-Cut algorithm and the AdaBoost algorithm. These clustered characters are then used for training different classifiers, e.g. the nearest neighbour classifier and the support vector machine classifier. All these classifiers can be implemented in the recognition engine, representing the coarse classification step. The different methods have been tested on real data, to see what hit rates can be obtained at different levels of speedup. The results from training the classifiers are very satisfying, and the classifiers are feasible to implement in the recognition engine.
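
As an illustration only (not code from the thesis), the following minimal Python sketch shows the coarse-to-fine idea described above: an input feature vector is first compared to precomputed cluster centres, and the expensive fine classification is then restricted to the characters in the best-matching clusters. All names and the choice of Euclidean distance are assumptions.

    import numpy as np

    def coarse_classify(x, cluster_centres, k=5):
        # Return indices of the k clusters whose centres are nearest to feature vector x.
        dists = np.linalg.norm(cluster_centres - x, axis=1)
        return np.argsort(dists)[:k]

    def fine_classify(x, templates, labels, candidate_clusters, cluster_of):
        # Nearest-neighbour search restricted to characters in the candidate clusters.
        idx = np.where(np.isin(cluster_of, candidate_clusters))[0]
        dists = np.linalg.norm(templates[idx] - x, axis=1)
        return labels[idx[np.argmin(dists)]]

The speedup comes from the first step: only a small fraction of the 6000+ character templates is ever compared against the input in the second step.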

PDE-driven 3D Surface Reconstruction from Calibrated Image Sequences (PDE-styrd ytrekonstruktion i 3D från kalibrerade bildsekvenser)
Student: Magnus Wendt, F97
Advisor: Magnus Oskarsson, Kalle Åström
Abstract: This thesis deals with 3D reconstruction of objects given that multiple images of the objects with full camera matrices are available. First I show how the well-known silhouette carving method can be accelerated using computer graphics hardware, and then a more advanced method based on a level set representation is explored and extended. Finally a new method based on the fast marching method is proposed. The underlying geometry is then tessellated using a novel tessellation algorithm based on repelling particles. The methods are tested on synthetic data generated from an OpenGL application and on photographic images whose camera matrices were available. The quality of the generated 3D models is satisfactory if the geometry of the object is fairly simple, and the models themselves can be viewed in the results section.
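
As an illustration of the baseline method mentioned above, here is a minimal Python/NumPy sketch of silhouette carving on the CPU (not the accelerated graphics-hardware version from the thesis). It assumes binary silhouette images and 3x4 camera matrices; a voxel is kept only if it projects inside every silhouette.

    import numpy as np

    def carve(voxels, silhouettes, cameras):
        # voxels: (N, 3) voxel centres; returns a boolean mask of voxels inside all silhouettes.
        homog = np.hstack([voxels, np.ones((len(voxels), 1))])   # homogeneous coordinates
        keep = np.ones(len(voxels), dtype=bool)
        for sil, P in zip(silhouettes, cameras):
            proj = homog @ P.T                                   # project into this view
            u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
            v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
            inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
            keep &= inside
            keep[inside] &= sil[v[inside], u[inside]] > 0        # silhouette test
        return keep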

Automated Quality Evaluation of Digital Retinal Images (Automatiserad kvalitetbedömning av digitala ögonbottenbilder)
Student: Herman Bartling, D02
Advisor: Fredrik Kahl
In cooperation with: Karolinska institutet, Peter Wanger
Date Finished: 2007-10-10
Abstract: In this study a method for automated quality evaluation of digital retinal images is developed and tested. The quality evaluation is based on the sharpness and illumination properties of the images. The outcome of the evaluation is presented in the form of a quality grade. The primary aim of the evaluation is to provide decision support for the assessment of image quality of retinal images. Retinal images are basically photographs depicting the inner posterior part of the eye, which is where the retina is located. These images provide an efficient way to examine the retinal health of patients. Abnormalities which are linked to typical eye diseases are directly visible in retinal images, and an examination is done by visual inspection of the images. A special camera (fundus camera) is used to capture retinal images. Modern fundus cameras are capable of producing high-quality images. The problem is, however, that the optic environment of retinal imaging is challenging, which makes it complicated to produce images of high quality. Images of low quality are frequently captured and accepted for the succeeding examination procedure, which is normally carried out at a later occasion. When low-quality images reach the examination procedure, numerous problematic scenarios may occur, e.g. pathological abnormalities may pass undetected due to the low image quality. Regardless of which scenario occurs, the low-quality images will have negative consequences both for the patient and for the health care system. The suggested quality evaluation method shows good potential in providing an objective assessment which correlates well with differently adjusted levels of image quality. The method also handles retinal images of varying formats in a reliable and correct way.
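
The following is only a sketch of how sharpness and illumination could be turned into a single grade, not the method from the thesis; the specific measures (variance of the Laplacian for sharpness, deviation from mid-grey for illumination) and the weights and threshold are assumptions.

    import numpy as np
    from scipy.ndimage import laplace

    def quality_grade(image):
        # image: 2D float array, a grey-level retinal image scaled to [0, 1].
        sharpness = laplace(image).var()              # high Laplacian variance = sharp edges
        illumination = 1.0 - abs(image.mean() - 0.5)  # penalise over- and under-exposure
        return 0.5 * min(sharpness / 1e-3, 1.0) + 0.5 * illumination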

Strain in the Normal Human Heart Assessed by Velocity Encoded MRI (Kartläggning av mekanisk töjning i det normala människohjärtat med hastighetskodad MRI)
Student: Helen Soneson, Pi03
Advisor: Kalle Åström, Matematikcentrum
In cooperation with: Klinisk fysiologi, Einar Heiberg
Date Finished: 2007-10-09
Abstract: Regional and global function of the heart is an important aspect to analyze, both in the treatment of heart disease and in the understanding of normal heart function. The imaging modality used for this purpose in this thesis is magnetic resonance imaging. The aim of this thesis is to build a normal model of the movement of the heart, analyzed both by displacement and by strain. This model is then used to analyze the correspondence between infarct regions and reduced wall motion in patients' hearts. The normal model was built from velocity-encoded two-dimensional MR images acquired in the 2, 3 and 4 chamber projections from 25 healthy adults. This model was then compared to strain and displacement in hearts with infarcts. The infarcted regions were well detected as having a lower strain compared to the normal model. The displacement in the infarcted hearts was also lower, not only in the infarcted areas but for the whole heart. This makes the strain model better than the displacement model for locating dysfunctional regions in a heart.
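
As a purely illustrative one-dimensional sketch (not the thesis code), displacement can be obtained by integrating the velocity field over time and strain as the spatial derivative of the displacement; the function name and sampling parameters below are assumptions.

    import numpy as np

    def displacement_and_strain(v, dt, dx):
        # v: (T, X) array of velocities along one wall segment from velocity-encoded MRI.
        u = np.cumsum(v, axis=0) * dt        # integrate velocity over time -> displacement
        strain = np.gradient(u[-1], dx)      # spatial derivative of the final displacement
        return u, strain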

Real Time Automatic License Plate Recognition in Video Streams (Realtids system för automatisk detektion och avläsning av nummerplåtar i bildsekvenser)
Student: Fredrik Trobro, F99
Advisor: Håkan Ardö, Kalle Åström
Date Finished: 2007-10-05
Abstract: In recent years there has been an increased commercial interest in systems for automatic license plate recognition. Some of the existing systems process single images only, some even requiring vehicles to stop in front of a gate so that a still image of good quality can be taken. This thesis presents an approach that makes it possible to process 25 images per second on a standard PC from 2004, allowing identification decisions to be based on information from several images instead of just a single one. Thus, the impact of a single bad image is reduced, and vehicles can be allowed to drive past the camera without hindrance. In order to reach the necessary processing speed, a simplified Stauffer/Grimson background estimation algorithm has been used, enabling the system to search for license plates only on objects that are in motion. The main method for finding license plates has been a computationally economical connected component labelling algorithm. A basic pixel-by-pixel comparison OCR algorithm has also been implemented. A real-life test running for twelve days has shown the complete system to have a rate of successful identification of 80 %.
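
The sketch below illustrates the motion-restricted search only, using OpenCV's mixture-of-Gaussians background subtractor as a stand-in for the simplified Stauffer/Grimson algorithm and OpenCV's connected component labelling; the area threshold and function names are assumptions, and the OCR stage is not shown.

    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2()

    def plate_candidates(frame):
        # Return bounding boxes of moving connected components in one video frame.
        foreground = subtractor.apply(frame)                       # background estimation
        foreground = cv2.threshold(foreground, 127, 255, cv2.THRESH_BINARY)[1]
        n, labels, stats, _ = cv2.connectedComponentsWithStats(foreground)
        # stats columns: x, y, width, height, area; label 0 is the background
        return [stats[i, :4] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 500]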

A Tool for Simulation of Diffusion in Emulsion (Ett simuleringsverktyg för diffusion i emulsioner)
Student: Petra Bratt, D99
Advisor: Gunnar Sparr, Niklas Norén (SIK-institutet)
In cooperation with: SIK-institutet för livsmedel
Date Finished: 2007-09-27
Abstract: The most fundamental form of transport for molecules in chemical and biochemical systems is diffusion. Knowledge of how molecules diffuse through different materials is of interest to, for example, manufacturers of food, medicine and pulp. Different techniques for studying diffusion, microscopy and NMR diffusometry, are combined by solving the diffusion equation directly in the structures using finite element methods. From the solution of the diffusion equation at different positions in the structure, it is possible to calculate the propagator, which is the probability of finding a particle at a certain position after a certain observation time. The propagator is then linked to the NMR diffusometry echo decay through the short gradient pulse approximation.

In particular the thesis aims at creating a tool for simulating diffusion in emulsions. The emulsions are described as different common geometries, such as spheres, connected to each other, through which the particles can travel. For this purpose, we employ the commercial program FEMLAB (COMSOL Multiphysics). Furthermore, a graphical user interface is created to simplify usage for users who are not familiar with the FEMLAB environment.
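
The thesis solves the diffusion equation with finite elements in FEMLAB; as a much simpler stand-alone illustration of the same equation, the following Python sketch uses an explicit finite-difference scheme in one dimension. The grid, diffusion coefficient and step sizes are arbitrary assumptions.

    import numpy as np

    def diffuse(u0, D, dx, dt, steps):
        # Explicit Euler time stepping of du/dt = D d2u/dx2; stable when D*dt/dx**2 <= 0.5.
        u = u0.copy()
        for _ in range(steps):
            u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        return u

    u0 = np.zeros(201)
    u0[100] = 1.0                                          # point-like initial concentration
    u = diffuse(u0, D=1e-9, dx=1e-6, dt=2e-4, steps=500)   # spreads out like the free-space propagator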


Numerical Solution of Fokker-Planck Equations for Fiber Laydown (Numerisk lösning av Fokker-Planck ekvationer som modellerar fibrer)
Student: Philip Reuterswärd, D01
Advisor: Kalle Åström, Axel Klar (ITWM)
In cooperation with: TU Kaiserslautern, ITWM
Date Finished: 2007-09-26
Abstract: A simplified Fokker-Planck model for the lay-down of fibers on a conveyor belt in the production process of non-wovens is investigated. It takes into account the motion of the fiber under the influence of turbulence. A semi-Lagrangian scheme that accurately captures the fiber dynamics and conserves mass is presented. The model is solved for different selections of model parameters to examine the qualitative behavior of the fiber lay-down process.
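
To illustrate the semi-Lagrangian idea in its simplest form (this is not the thesis scheme, which also handles the diffusion part and enforces mass conservation), one time step of a 1D advection equation u_t + a u_x = 0 traces each grid point back along its characteristic and interpolates the old solution there.

    import numpy as np

    def semi_lagrangian_step(u, a, x, dt):
        # u: values on the grid x; a: advection speed; returns u after one time step.
        departure = x - a * dt             # foot of the characteristic for each grid point
        return np.interp(departure, x, u)  # interpolate the old solution at the departure points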

Automatic Fault Detection in Cheese using Computer Vision
Student: David Wrangborg
Advisor: Kalle Åström, Håkan Ardö
In cooperation with: Frans Bengt Nilsson, Skånemejerier and Ylva Ardö, Department of Food Science, University of Copenhagen
Date Finished: 2007-06-14
Abstract: In the production of cheese with eyes (bubbles of CO$_2$, often referred to as holes) there are occasionally problems with cracks in the cheese. These cracks can pose a problem when cutting up the cheese and, even though they are harmless, they cause the cheese to appear less attractive to the consumer. The cheese-producing companies are therefore interested in the microbiological reasons behind the cracks. As one step towards finding these, statistics of when the cracks appear, what they look like and where on the cheese they are need to be gathered.

This master thesis examines the possibility of using digital images taken when cutting up the cheese, together with automated image analysis, to gather these statistics. This is done by taking a photo of the cut surface of all cheeses passing during the cutting-up process. The resulting images are then segmented by classifying the pixels as cheese or background according to their colour value and Bayes' theorem. An ellipse is fitted to the cheese pixels. Everything inside the ellipse is considered to be cut surface; everything outside is considered background and is therefore removed. The image is rectified to a top view, which allows the image to be taken from a slightly tilted viewpoint.

Several different filters, designed to be sensitive to cracks, are applied to the rectified images. The outputs of these filters are used as features for the classification. The common classification algorithm Support Vector Machine is then trained on training data consisting of three classes: flat cheese, eyes and cracks. The resulting classifier is then used to classify all the pixels in the images into these three classes.
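
As a minimal sketch of this classification step (illustrative only; the filter responses and labels below are random placeholders, not data from the thesis), a support vector machine can be trained on per-pixel feature vectors with scikit-learn:

    import numpy as np
    from sklearn.svm import SVC

    # features: (n_pixels, n_filters) filter responses; labels: 0 = flat cheese, 1 = eye, 2 = crack
    features = np.random.rand(300, 4)
    labels = np.random.randint(0, 3, 300)

    classifier = SVC(kernel='rbf')
    classifier.fit(features, labels)
    predicted = classifier.predict(features)   # a class for every pixel of an image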

The third, and most interesting, class, cracks, contains information about whether there are any cracks and, if so, how big they are and where they are located. This information can be further processed with morphological operations to decide whether the specific cheese has a crack of a certain size.

The results, from tests on images taken in a live industrial environment, are promising, but further development is needed for the method to be usable commercially.


Student: Johan Carlsson, F02
Advisor: Fredrik Kahl
Date Finished: 2007-04-16
Abstract: The term Image Mosaicing means putting together a number of images. This is of course the central part of digital panoramic image generation. In the early days, special hardware, such as wide-angle cameras or tripods, was used when capturing suitable panorama images. The use of this hardware simplified the software calculations involved when producing the final panoramas. Today, as computers have grown more powerful, it is possible to create good-quality panoramas from pictures taken with a handheld digital camera as well. Generation of digital panoramas is a versatile field and encompasses many problems, for which many different solutions exist. This thesis presents a system for panorama generation. It puts focus on the different parts of the problem, and presents and compares a number of solutions for each part. In the end, the different techniques are put together to create a novel system that can build multi-image panoramas. The essential parts that will be discussed are: deciding and applying a mapping that spatially transforms the input images into a final output panorama; blending the input images, so that the inter-image seams become as faint as possible; and deghosting moving objects that, in areas where images overlap, might occur in one, but not all, images. In connection with these areas, techniques such as pixel interpolation, image intensity filtering and posterior image correction will also be discussed.
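
A minimal sketch of the mapping part only, assuming matched point pairs between two images are already available (OpenCV is used here for illustration; the blending and deghosting steps discussed above are not shown):

    import cv2
    import numpy as np

    def warp_onto(image, points_src, points_dst, output_size):
        # points_src/points_dst: (N, 2) float arrays of matched coordinates, N >= 4.
        H, _ = cv2.findHomography(points_src, points_dst, cv2.RANSAC)  # robust homography estimate
        return cv2.warpPerspective(image, H, output_size)              # map into the panorama frame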

Image Classification for Mobile Platforms
Student: Viktor Holtenäs Nygårdh, D02 and Anders Tidbeck, D02
Advisor: Fredrik Kahl and Magnus Oskarsson
In cooperation with: Tactel AB
Date finished: 2007-04-04
Abstract: In today's cellphones, and other mobile devices, we often find a camera. With the camera close at hand, people use their cellphones in other ways than ten years ago. With the increasing number of photographs in the devices comes the need for effective and intuitive ways to manage the collections of images. There are countless ways and ideas on how to do this, and we have studied a few possibilities. Our solution is built around the theory of Bag-of-Features and aims at keeping the cellphone's photo album sorted in different categories, such as forest, city and people. By maintaining this order, we hope to make searching easier for the user than browsing pictures by filename or photo date.
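
A minimal sketch of the Bag-of-Features pipeline, not taken from the thesis: local descriptors are clustered into a visual vocabulary, and each image is represented as a histogram of visual words. The descriptor (ORB) and vocabulary size are assumptions made for illustration.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    orb = cv2.ORB_create()

    def descriptors(image):
        _, desc = orb.detectAndCompute(image, None)
        return desc.astype(np.float32)

    def vocabulary(training_images, n_words=50):
        all_desc = np.vstack([descriptors(img) for img in training_images])
        return KMeans(n_clusters=n_words).fit(all_desc)

    def bag_of_features(image, vocab):
        words = vocab.predict(descriptors(image))
        return np.bincount(words, minlength=vocab.n_clusters)   # histogram of visual words

The histograms can then be fed to any standard classifier to sort images into categories such as forest, city or people.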

Sustained Oscillations in Epidemiological Models
Student: Zacharias Enochsson, F99
Advisor: Mario Natiello
Date finished: 2007-03-30
Abstract: When deterministic models of density-dependent population-dynamical systems predict decaying oscillations, the corresponding stochastic models display sustained oscillations. This difference can be explained by choosing a Lyapunov function for the deterministic system. Studying the expected tendencies of the stochastic system to grow or decrease in terms of the chosen Lyapunov function, one finds an unstable region around the fixed point of the deterministic system. In this work, the significance of the Lyapunov function is studied: in particular, how to choose the most informative one, and how to use it to approximate the oscillations. The study is conducted in the context of a particular epidemic model, but the generality of the method is also discussed.
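
The following Python sketch is only an illustration of the kind of stochastic simulation in which such sustained oscillations appear; it uses a Gillespie-type simulation of an SIRS model with arbitrary parameters, which is not necessarily the epidemic model studied in the thesis.

    import numpy as np

    def gillespie_sirs(S, I, R, beta=1.0, gamma=0.1, xi=0.01, t_end=1000.0):
        t, history = 0.0, []
        while t < t_end and I > 0:
            N = S + I + R
            rates = np.array([beta * S * I / N, gamma * I, xi * R])  # infection, recovery, waning
            total = rates.sum()
            t += np.random.exponential(1.0 / total)                  # time to the next event
            event = np.random.choice(3, p=rates / total)
            if event == 0:   S, I = S - 1, I + 1
            elif event == 1: I, R = I - 1, R + 1
            else:            R, S = R - 1, S + 1
            history.append((t, S, I, R))
        return history

Plotting I over time for such a run typically shows oscillations that do not die out, in contrast to the damped oscillations of the corresponding deterministic model.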

Multi Picture Depth Estimation using Constrained Devices
Student: David Fredh, Pi02
Advisor: Magnus Oskarsson
In cooperation with: Scalado
Date finished: 2007-03-09
Abstract: Constrained devices such as mobile phones have become interesting as imaging products. This is caused by the built-in cameras, the internet connection and services like MMS and blogs. This thesis considers a method to perceive depth from two images of the same view, taken within a short interval of time. The resulting depth map may be used as an alpha channel for effects such as selective focus and other effects that may be enhanced by using the depth information. The method uses the motion parallax of feature points that are matched between the two images to determine the depth map. A method to detect and describe salient features in the images, using only the first four DCT coefficients combined with the well-known Harris-Stephens detector, is proposed. It is compared to two other well-known methods for image registration, SIFT and SURF. The method works in the compressed domain only, and the DCT coefficients can be extracted directly from JPEG-compressed images. Sub-pixel accuracy of the matched points is achieved using block shifting in the DCT domain. The proposed DCT-based matching method is shown to be both fast and accurate. It is implemented in C and possible to use in a constrained device.
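
As a rough illustration of the motion-parallax step only (in Python rather than the thesis' C implementation): for a sideways camera translation, the depth of a matched feature point is inversely proportional to its parallax. The focal length and baseline below are placeholder values.

    import numpy as np

    def depth_from_matches(points_a, points_b, focal=800.0, baseline=0.05):
        # points_a/points_b: (N, 2) matched image coordinates from the two photos.
        disparity = np.abs(points_a[:, 0] - points_b[:, 0])    # horizontal parallax in pixels
        return focal * baseline / np.maximum(disparity, 1e-6)  # larger parallax = closer point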

Single Image Focus Level Assessment Using Support Vector Machines
Student: Oscar Beijbom, F00
Advisor: Kalle Åström, Anders P Eriksson, Sven Hedlund (Cellavision) and Kent Stråhlén (Cellavision)
In cooperation with: Cellavision AB
Date finished: 2007-03-09
Abstract: Differential white blood cell count is the process of counting and classifying white blood cells in blood smears. It is one of the most common clinical tests, performed in order to make diagnoses in conjunction with medical examinations. These tests indicate diseases such as infections, allergies and blood cancer, and approximately 200-300 million are done yearly around the world. Cellavision AB has developed machines that automate this work and is the global leader in this market. The method developed in this thesis will replace and improve the autofocus routine in these machines. It makes it possible to capture a focused image in only two steps, instead of using an iterative multi-step algorithm like those used today in most autofocus systems, including the one currently used at Cellavision. In the proposed method a Support Vector Machine, SVM, is trained to assess quantitatively, from a single image, the level of defocus as well as the direction of defocus for that image. The SVM is trained on features that measure both the image contrast and the image content. High precision is made possible by extracting features from the different parts of the image as well as from the image as a whole. This requires the image to be segmented, and a method for doing this is proposed. Using this method, the distance to focus was estimated with an error of at most 5 µm for 99.5 % of the images in the test data, while over 85 % were classified completely correctly. A 5 µm defocus is borderline to what the human eye perceives as defocused. Cellavision AB has applied for a patent to protect the method described in this thesis.
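
The sketch below only illustrates the general idea of extracting contrast features from sub-blocks and from the whole image and regressing them onto a defocus distance; the feature (mean absolute horizontal gradient), the block layout, and the placeholder training data are assumptions, not the patented method.

    import numpy as np
    from sklearn.svm import SVR

    def contrast_features(image, blocks=2):
        # Mean absolute horizontal gradient for each sub-block and for the whole image.
        rows = np.array_split(image, blocks, axis=0)
        parts = [p for r in rows for p in np.array_split(r, blocks, axis=1)] + [image]
        return np.array([np.abs(np.diff(p, axis=1)).mean() for p in parts])

    images = [np.random.rand(64, 64) for _ in range(20)]   # placeholder images
    defocus = np.random.uniform(0, 10, 20)                 # placeholder defocus labels (micrometres)
    model = SVR().fit(np.array([contrast_features(im) for im in images]), defocus)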

Detection, Segmentation and Recognition of Text on Road Signs
Student: Christoffer Möllenhoff, D02
Advisor: Magnus Oskarsson, G. Cross (Geospatial Vision Limited)
In cooperation with: Geospatial Vision Limited, Oxford
Date finished: 2007-02-26
Abstract: This thesis describes a technique for detecting and recognising text on road signs, from images captured by a moving van mounted with cameras at multiple angles. Both the backgrounds and the individual characters of the road signs are detected using Maximally Stable Extremal Regions. False positives are then discarded using a series of geometrical heuristics. Once the signs have been detected and triangulated, they are segmented and binarised before being passed on to an open source OCR system. As a final step, the recognition rate is improved by matching the recognised words against a dictionary with domain-specific vocabulary.
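
A minimal sketch of the detection step using OpenCV's MSER implementation (illustrative only; the geometric heuristics, triangulation and OCR stages described above are not shown):

    import cv2

    mser = cv2.MSER_create()

    def stable_regions(grey_image):
        # Return maximally stable extremal regions as lists of pixel coordinates.
        regions, _ = mser.detectRegions(grey_image)
        return regions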
