Construction and Visualisation of Three-dimensional Facial Statistics
Bernard Tiddeman, Neil Duffy and Graham Rabey
September 30, 1999

Abstract

This paper presents a new method for the construction of three-dimensional probabilistic facial averages and demonstrates the potential for applications in clinical craniofacial research and patient assessment. Averages are constructed from a database of registered laser-range scans and photographic images using feature-based image warping. Facial features are extracted using a template of connected contours, adapted to each subject interactively using snakes. Each subject's images are warped to the average template shape and the mean depth, colour and covariance matrix are found at each point. Statistical comparison of individuals with an average, or between two averages, is visualised by converting the probabilities to a coloured texture map.

Affiliation of Authors

Bernard Tiddeman carried out the work contained in this paper while with the Interface Project, Department of Plastic Surgery, Royal Victoria Infirmary, Newcastle Upon Tyne, NE1 4LP, UK. He is now working with the Perception Laboratory, School of Psychology, University of St Andrews, Fife, KY16 9JP, UK. Telephone +44 1334 463044. Email: [email protected]. Neil Duffy is a Senior Lecturer in the Department of Computing and Electrical Engineering, Heriot-Watt University, Riccarton, Edinburgh, EH14 4AS, UK. Graham Rabey is with the Interface Project, Stoke Mandeville Hospital, Aylesbury, Bucks, HP21 8AL, UK.

Keywords

Facial statistics, facial averages, morphanalysis, 3D medical imaging.


1 Introduction

The quantitative assessment of craniofacial patients can provide valuable information for clinical research, diagnosis and treatment planning. In recent years the increased availability of 3D data capture systems and powerful computer hardware has encouraged the design of new computer techniques for processing 3D data and the adoption of such techniques for both clinical and non-clinical applications. For the maximum patient benefit, the uptake of new computer processing methods needs to be matched by the clinical demands of robustness, accuracy and repeatability. In this paper we incorporate several of the new methods from the computer graphics and 3D modelling literature into an established clinical system for craniofacial assessment. This results in a new method for the construction of standardised 3D facial averages and new methods for the statistical assessment of individual patients and patient groups.

2 Background

Morphanalysis is a discipline that employs a standardised framework for quantitative craniofacial assessment [1] [2]. The Interface Research Project is concerned with studying facial shape using the principles of morphanalysis, which are implemented in a machine called an analytic morphograph. The Interface Project's analytic morphograph captures orthogonal radiographic and photographic images and has recently been enhanced with the addition of a laser-video surface scanner [3]. This paper presents methods for extending the previous orthogonal two-dimensional morphanalytic work into the three-dimensional modality of laser-scanned depth data.

The construction of average two-dimensional facial images has been advanced by Benson and Perrett et al [4] [5] [6] for the study of facial perception. In their method, a template of points was adapted manually to each subject in a database of registered facial images. The average position of each point in the template was found and this defined the average shape. Each subject was then warped into the average shape and averaged pixel by pixel to produce an average 2D image. In the medical imaging community Bookstein has developed a similar method using thin-plate splines for image warping and analysis [7].

A partially 3D method for averaging laser-scanned facial surface data has been proposed by McCance et al [8]. Averages are constructed by orientating the data sets to a common reference frame using landmarks, resampling the cylindrical depth data in the new orientation and then finding the average and standard deviation of the depth at each pixel. Because the depth measurements are only compared radially (for cylindrical depth data) the approach cannot guarantee to match anatomical landmark points. As a result, for simple geometric shapes, the method can produce averages that conflict with the common-sense solution (e.g. Figure 1). Yamada et al [9] have automatically extracted 3D landmark points from range data for analysis of cleft-lip and palate infants. Their method uses only 18 of the approximately 30,000 captured data points for craniofacial analysis, and so does not study the curving surface between the landmarks.

Several methods have been proposed for the more complex case of fully three-dimensional volumetric data. These include interactive or automatic adaptation of a manually defined surface template [10] [11] [12]. These methods have been designed for internal anatomical surfaces and so do not include non-rigid registration of surface intensity features. They are also rather complex for the simpler case of a single surface as captured by the laser range scanner. Automatic volumetric methods include iterative matching of crest-lines [13] and physics-based matching of intensities [14] [15] [16] [17] [18] [19]. O'Toole and Vetter et al [20] have used optical flow methods for the automatic construction of average facial images from range data for studies in facial perception. Preliminary testing of iterative edge and curvature point matching and of automatic fluid surface matching for range data proved less robust than the interactive method proposed here, and robustness is essential for medical applications [21].

In this paper we exploit the two-dimensional nature of the depth data, using a combination of the two-dimensional warping methods and the surface depth averaging method for the construction of average facial data. The previous methods are also extended to higher order statistical analysis of the facial surface using a 3D Gaussian model for the distribution of the positions of all of the approximately 30,000 facial points in the scanned data. This allows the probabilistic analysis of individual subjects and significance testing between averages, which have applications in diagnosis, treatment planning and clinical research.

3 Design Considerations

The Interface Project's laser scanner uses a linear motion to scan the face at a resolution of approximately 1 mm per pixel, capturing 65,000 data points, approximately half of which are on the face [3]. The aim of the work reported here is to develop new computer methods and programs to incorporate this standardised laser-range data into the broader framework of morphanalysis [1] [2], and to perform preliminary testing of the system on normal subjects. Until now morphanalysis has used standardised orthogonal 2D images to construct average and standard deviation craniofacial outlines of patient groups. This has allowed the quantitative analytical comparison of individual patients and groups, improving diagnosis and hence treatment planning.

4 System Description

A three-stage approach is used to match points on different subjects' facial images and to allow the construction of probabilistic facial averages. The three stages are rigid-body registration, feature extraction and image warping. Following these stages, matching points are labelled on different facial images, allowing the use of standard multivariate statistical methods across the facial surface.

Rigid-body registration is achieved using a method called the fixed relations principle, so that all the scanned data have a common reference frame. Interactive extraction of points and contours is performed using a template of connected snakes [22]. Image warping using multi-level free-form deformations extends the feature matching to the entire surface. The mean and normal distribution of each point on the facial surface can then be found.

Visualisation of the spatial probability of individual scans is performed by converting the spatial probabilities across the surface into a coloured texture map. Significance testing between two averages is performed with Hotelling's $T^2$ test and the results are again displayed by converting the probabilities into a coloured texture map.

4.1 Registration

Rigid-body registration of the three-dimensional data is necessary to ensure that the spatial coordinates of facial features have a consistent and meaningful relationship. We use a registration technique called the fixed relations principle [1] to accurately register patients to an orthogonal Cartesian reference frame defined in the analytic morphograph. The accuracy of this registration method has previously been tested to within 0.2 mm.

The analytic morphograph contains a pressure-sensitive cephalostat, with the axis through the centre of its ear rods defining the y-axis. These rods are placed in the subject's ears until the pressures are equalised to standard values. At the pointed tip of each of the cephalostat's ear rods is a piece of lead shot, and the midpoint between these two points defines the origin. This eliminates the inaccuracies in traditional cephalostatic systems due to the rotation of the porions about the cephalostat's y-axis and the variability and deformity of the external acoustic meatus. The third registration point is a lead-shot marker placed on the skin over the subject's left orbitale, located by a clinician by palpation. This reduces the error incurred in trying to accurately locate the orbitale on images. The subject is placed in the cephalostat and then asked to rotate their head until the orbitale marker lies in the xy-plane. This plane is visible as a horizontal line in the morphograph's window, superimposed on its reflection in a mirror opposite (Figure 2). The use of a physical fixation method prior to scanning ensures that two-dimensional images (such as X-ray images) can be quantitatively related to the three-dimensional laser-video data.

4.2 Data Capture and Preprocessing

In addition to the laser-range data, a photograph of each subject was also captured on Polaroid instant (chemical) film in the usual manner for morphanalytic photographs [1] [2] and digitised. The photographic images contain important information for use in the feature extraction stage. Because the images can become rotated slightly in the digitisation process, they were manually aligned by locating two marks in the images, one on each of the cephalostat's ear rods.

A final preparation stage was used to remove the perspective distortion from the photographic images. For each undistorted pixel $(y, z)$ the true depth $x$ is known from the depth data. The coordinates $(Y, Z)$ of this pixel in the photographic image are given by

$$Y = \frac{f\,y}{D - x}, \qquad (1)$$

$$Z = \frac{f\,z}{D - x}, \qquad (2)$$

for focal length $f = 18.59$ mm and distance $D = 169$ mm from the $(y, z)$ plane to the camera, using a pinhole model.
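As a concrete illustration of equations (1) and (2), the sketch below resamples a photograph onto the orthographic grid of the depth map. It is a minimal interpretation, assuming the images are stored as NumPy arrays with a 1 mm pixel pitch, a centred principal point and a greyscale photograph; the function and array names are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def undistort_photo(photo, depth, f=18.59, D=169.0, pixel_mm=1.0):
    """Resample a perspective photograph onto the orthographic grid of the depth map,
    using the pinhole relations Y = f*y/(D - x) and Z = f*z/(D - x).

    photo    : 2D greyscale array indexed by photographic pixels (colour would be
               resampled one channel at a time)
    depth    : 2D array of true depths x on the (z, y) output grid
    f, D     : focal length and camera distance in mm (values reported in the paper)
    pixel_mm : assumed pixel pitch in mm
    """
    nz, ny = depth.shape
    # Orthographic coordinates of every output pixel, in mm, origin at the image centre
    # (the choice of origin is an assumption for this sketch).
    z, y = np.meshgrid((np.arange(nz) - nz / 2) * pixel_mm,
                       (np.arange(ny) - ny / 2) * pixel_mm, indexing="ij")
    x = depth
    # Pinhole projection: where each (x, y, z) surface point appears in the photograph.
    Y = f * y / (D - x)
    Z = f * z / (D - x)
    # Convert projected mm back to photographic pixel indices.
    cols = Y / pixel_mm + photo.shape[1] / 2
    rows = Z / pixel_mm + photo.shape[0] / 2
    # Bilinear resampling of the photograph at the projected positions.
    return map_coordinates(photo, [rows, cols], order=1, mode="nearest")
```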

4.3 Feature Extraction

For the biologically meaningful comparison of subjects it is essential that anatomically identifiable features are matched correctly. The simplest method of labelling features is to manually locate landmarks. Manual location of landmarks is open to user error and cannot be used for higher order features such as contours. Several interactive methods for extracting contours have been proposed in the literature, including Bayesian methods, graph-searching and energy minimisation. The most popular of these are active contours or `Snakes' [22]. Snakes have been used to assist in the extraction of features for warping two-dimensional images [23].

Snakes are energy-minimising splines that are attracted towards strong image features while their bending energy and tension are minimised. The contour is modelled as a set of nodes connected by stiff springs. An initial approximation of the contour is supplied by the user and iterated under external and internal forces until a minimum is found. The external forces produced by the image are given by the gradient of an edge potential function $P(x, y)$,

$$f_i = p\,\nabla P(x, y) + f_i^u. \qquad (3)$$

Here $p$ is a user-defined value that controls the strength of the image forces and $f_i^u$ are optional user forces that allow the user to interact with the contour. The internal contour forces are determined by the tension and the stiffness. The extension $e_i$ at node $i$ depends on the separation of the nodes, i.e.

$$e_i = |x_{i+1} - x_i| - l_i, \qquad (4)$$

where $l_i$ is the spring's natural length. This extension produces a tension force $\tau_i$,

$$\tau_i = a_i\, e_i\, \frac{x_{i+1} - x_i}{|x_{i+1} - x_i|}. \qquad (5)$$

The relative influence of the tension force is controlled by the variables $a_i$. The stiffness or bending energy along the contour can be approximated by a five-point finite difference method,

$$\sigma_i = b_{i+1}\,(x_{i+2} - 2x_{i+1} + x_i) - 2 b_i\,(x_{i+1} - 2x_i + x_{i-1}) + b_{i-1}\,(x_i - 2x_{i-1} + x_{i-2}), \qquad (6)$$

where $b_i$ is the rigidity variable. The entire system is governed by the first-order dynamic system

$$\gamma\,\frac{dx_i}{dt} + \tau_i + \sigma_i = f_i, \qquad (7)$$

where $\gamma$ is a velocity-proportional damping coefficient. This can be integrated through time using a numerical method such as the semi-implicit Euler method or a finite element approach. Using a forward finite difference approximation for the derivative,

$$\frac{dx_i}{dt} \approx \frac{x_i^{t+\Delta t} - x_i^t}{\Delta t}, \qquad (8)$$

and evaluating the linear terms in $x_i$ at time $t + \Delta t$ and the nonlinear terms at time $t$ leads to a pentadiagonal system of equations,

$$\frac{\gamma}{\Delta t}\, x_i^{t+\Delta t} + \sigma_i^{t+\Delta t} = \frac{\gamma}{\Delta t}\, x_i^t - \tau_i^t + f_i^t, \qquad (9)$$

for the new node positions $x_i^{t+\Delta t}$ in terms of the old node positions $x_i^t$. The left-hand side of this equation can be represented by a matrix that is time independent. This matrix can therefore be inverted once at the start and used to efficiently multiply the new right-hand side at each time step. This makes snakes a fast and flexible method for contour extraction.
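To make the update above concrete, the following sketch assembles the time-independent matrix of equation (9) for a closed snake with uniform tension $a$, rigidity $b$ and damping $\gamma$, inverts it once and applies one semi-implicit step. The closed (periodic) boundary, the way the two springs meeting at each node are summed, and all parameter values are assumptions made for illustration rather than details given in the paper.

```python
import numpy as np

def make_snake_solver(n, b=0.5, gamma=1.0, dt=1.0):
    """Build and invert the time-independent matrix of equation (9) for a closed
    snake of n nodes with uniform rigidity b.  Only the bending term (linear in x)
    is treated implicitly; tension is kept on the right-hand side."""
    A = np.zeros((n, n))
    for i in range(n):
        # Five-point finite-difference bending operator of equation (6), uniform b.
        A[i, (i - 2) % n] += b
        A[i, (i - 1) % n] += -4.0 * b
        A[i, i] += 6.0 * b
        A[i, (i + 1) % n] += -4.0 * b
        A[i, (i + 2) % n] += b
    M = gamma / dt * np.eye(n) + A
    return np.linalg.inv(M)            # inverted once, reused at every time step

def snake_step(x, Minv, grad_P, a=0.1, l=1.0, p=1.0, gamma=1.0, dt=1.0):
    """One semi-implicit Euler step of equation (9).
    x      : (n, 2) array of node positions
    grad_P : function returning the edge-potential gradient sampled at the nodes"""
    d = np.roll(x, -1, axis=0) - x                             # x_{i+1} - x_i
    length = np.linalg.norm(d, axis=1, keepdims=True)
    spring = a * (length - l) * d / np.maximum(length, 1e-9)   # equation (5)
    tau = spring - np.roll(spring, 1, axis=0)                  # net pull of the two springs at node i
    f = p * grad_P(x)                                          # image force of equation (3)
    rhs = gamma / dt * x - tau + f                             # right-hand side of equation (9)
    return Minv @ rhs                                          # new node positions
```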

A template of cubic spline contours and landmark points is constructed manually and then adapted to each subject interactively using snakes. The template used in this study (Figure 3) consisted of 32 landmark points and 32 snakes. Approximate adaptation of the template to each subject is performed manually by adjusting the contour control points. Each contour is either fixed or attracted to one of four image functions: the intensity image ($g(x, y)$) gradient magnitude,

$$|\nabla g(x, y)|, \qquad (10)$$

the depth image ($f(x, y)$) gradient magnitude,

$$|\nabla f(x, y)|, \qquad (11)$$

the positive depth curvature (Figure 4),

$$k_{\max} = H + \sqrt{H^2 - K}, \qquad (12)$$

or the negative depth curvature,

$$k_{\min} = H - \sqrt{H^2 - K}, \qquad (13)$$

where

$$H = \frac{\left(1 + f_y^2\right) f_{xx} + \left(1 + f_x^2\right) f_{yy} - 2 f_x f_y f_{xy}}{2\left(1 + f_x^2 + f_y^2\right)^{3/2}} \qquad (14)$$

and

$$K = \frac{f_{xx} f_{yy} - f_{xy}^2}{\left(1 + f_x^2 + f_y^2\right)^2}, \qquad (15)$$

where subscripts denote partial derivatives. The energy function for each contour is chosen by the user when the template is created. The junction of two image contours represents a landmark. The translations due to all the snakes connected to a landmark are averaged after each iteration of the template snakes. In this way the adaptation of the contours improves the location of the landmarks.
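The four attractor functions of equations (10)-(15) can be computed directly from the intensity and depth images with finite differences. The sketch below is one possible implementation, assuming both images are NumPy arrays sampled on the same grid; the Gaussian pre-smoothing and the array axis convention are added assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def snake_potentials(g, f, sigma=2.0):
    """Return the four image functions of equations (10)-(15): intensity gradient
    magnitude, depth gradient magnitude, and the maximum/minimum principal
    curvatures of the depth map f.  Arrays are assumed indexed as [y, x]."""
    g = gaussian_filter(g.astype(float), sigma)   # smoothing is an added assumption
    f = gaussian_filter(f.astype(float), sigma)

    gy, gx = np.gradient(g)
    grad_g = np.hypot(gx, gy)                     # equation (10)

    fy, fx = np.gradient(f)
    grad_f = np.hypot(fx, fy)                     # equation (11)

    fyy, fyx = np.gradient(fy)                    # second derivatives of the depth
    fxy, fxx = np.gradient(fx)

    denom = 1.0 + fx**2 + fy**2
    H = ((1 + fy**2) * fxx + (1 + fx**2) * fyy - 2 * fx * fy * fxy) \
        / (2 * denom**1.5)                        # mean curvature, equation (14)
    K = (fxx * fyy - fxy**2) / denom**2           # Gaussian curvature, equation (15)

    disc = np.sqrt(np.maximum(H**2 - K, 0.0))     # clamp tiny negative values
    k_max = H + disc                              # equation (12)
    k_min = H - disc                              # equation (13)
    return grad_g, grad_f, k_max, k_min
```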

4.4 Interpolation of Feature Correspondences

Adaptation of the contour template described above labels only a fraction of the available surface data. In order to extend the labelling across the surface, interpolation is used. The translation vectors from each point in one template to the corresponding points in another template are interpolated smoothly using multilevel free-form deformations (MFFDs) [23] [24]. These are multi-level B-spline surfaces that are constructed to satisfy the border constraints (i.e. the template translations) at each scale along a dyadic sequence in a least-squares sense. They are far more efficient to construct than either analytically or numerically solved thin-plate spline warps for the large number of points contained in the feature templates.

MFFDs are a multiscale extension of free-form deformations (FFDs) [25] [26], which have proved popular for modelling surface shape in three dimensions. In two dimensions an FFD is given by two cubic B-spline surfaces constructed over a lattice of control points, defined by

$$f(x, y) = \sum_{k=i}^{i+3} \sum_{l=j}^{j+3} B_k(x)\, B_l(y)\, \phi_{kl}, \qquad (16)$$

where $i = \lfloor x/\mathrm{gridsize} \rfloor$, $j = \lfloor y/\mathrm{gridsize} \rfloor$, $\phi_{kl}$ is the value of the control lattice at point $(k, l)$, and $B_k(x)$ and $B_l(y)$ are the $k$th and $l$th B-spline basis functions evaluated at $x$ and $y$ respectively. Each lattice control point will influence the resulting function in a square area four times the grid size surrounding the control point. If the surface given above is required to satisfy a single border constraint $z_i$ within this square, an infinite number of possible choices for the control lattice values $\phi_{kl}$ exist. The choice

$$\phi_{kl}(z_i) = \frac{w_{kl}\, z_i}{\sum_{a=i}^{i+3} \sum_{b=j}^{j+3} w_{ab}^2}, \qquad (17)$$

where $w_{ab} = B_a(x) B_b(y)$, will minimise the control lattice displacements in the least-squares sense [25]. In general there will be more than one border constraint within each area of four-by-four grid squares. It may not be possible to satisfy all of the constraints exactly, but the error can be minimised in the least-squares sense using lattice values $\phi_{kl}$ given by [24]

$$\phi_{kl} = \frac{\sum_i w_{kli}^2\, \phi_{kl}(z_i)}{\sum_i w_{kli}^2}, \qquad (18)$$

where $w_{kli} = B_k(x_i)\, B_l(y_i)$.

For a small grid size the standard B-spline interpolation given above will satisfy the border constraints accurately, but will show sharp local variations. Alternatively, a large grid size will give smoother local variations but may not accurately satisfy the border conditions. In order to overcome this drawback, multilevel B-spline interpolation has been proposed. The function is first interpolated using a large grid, then the grid is reduced (usually by a factor of two) and any remaining error in the control points is interpolated and added to the result. The process is repeated until the lattice is of size one or the border conditions are satisfied.
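The following is a minimal sketch of the B-spline approximation of equations (16)-(18), with a simple multilevel loop that fits the residual error on successively halved grids. The lattice padding, index offsets and the representation of the multilevel result as a list of lattices are simplifications assumed here rather than the exact formulation of [24]; each displacement component (y and z) would be fitted separately with scalar constraint values.

```python
import numpy as np

def bspline_basis(t):
    """Uniform cubic B-spline basis functions B_0..B_3 evaluated at t in [0, 1)."""
    return np.array([(1 - t)**3 / 6.0,
                     (3*t**3 - 6*t**2 + 4) / 6.0,
                     (-3*t**3 + 3*t**2 + 3*t + 1) / 6.0,
                     t**3 / 6.0])

def ba_level(points, values, grid_shape, grid_size):
    """One level of the approximation of equations (17)-(18): least-squares
    control-lattice values for scattered scalar constraints."""
    num = np.zeros(grid_shape)
    den = np.zeros(grid_shape)
    for (x, y), z in zip(points, values):
        i, j = int(x // grid_size), int(y // grid_size)
        w = np.outer(bspline_basis(x / grid_size - i),
                     bspline_basis(y / grid_size - j))   # w_kl = B_k(x) B_l(y)
        phi = w * z / np.sum(w**2)                       # equation (17)
        num[i:i+4, j:j+4] += w**2 * phi                  # equation (18), numerator
        den[i:i+4, j:j+4] += w**2                        # equation (18), denominator
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)

def evaluate(lattice, x, y, grid_size):
    """Evaluate the B-spline surface of equation (16) at a single (x, y)."""
    i, j = int(x // grid_size), int(y // grid_size)
    w = np.outer(bspline_basis(x / grid_size - i), bspline_basis(y / grid_size - j))
    return float(w.ravel() @ lattice[i:i+4, j:j+4].ravel())

def mffd(points, values, domain, coarse_size, levels):
    """Multilevel refinement: fit the residuals on successively halved grids.
    The warp at (x, y) is the sum of evaluate() over the returned levels."""
    residual = np.asarray(values, float).copy()
    surfaces, size = [], float(coarse_size)
    for _ in range(levels):
        shape = (int(domain[0] // size) + 4, int(domain[1] // size) + 4)
        lattice = ba_level(points, residual, shape, size)
        surfaces.append((lattice, size))
        residual -= np.array([evaluate(lattice, x, y, size) for x, y in points])
        size /= 2.0
    return surfaces
```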

4.5 Surface Statistics

The two stages described above provide a method for relating anatomical points in two or more subjects across the depth-map surface. In order to construct an average surface from a sample of subjects, the average template is first created by averaging all related two-dimensional points in the sample's templates. This defines the two-dimensional average for the sample, to which each subject's images (photograph and laser scan) are warped. The average colour and depth can then be found at each 2D point in the average. This defines the average shape and surface colour for the sample.

To extend the statistical analysis to probabilities, the covariance matrix is found at each point on the average surface [27]. Each point on the average surface corresponds to a 3D point in each of the subjects, given by the 2D warping function (y and z directions) and the depth (x direction). The covariance matrix is then given by

$$C = \begin{pmatrix} c_{xx} & c_{xy} & c_{xz} \\ c_{yx} & c_{yy} & c_{yz} \\ c_{zx} & c_{zy} & c_{zz} \end{pmatrix}, \qquad (19)$$

where

$$c_{ij} = \frac{1}{N-1} \sum_{n=1}^{N} \left(i_n - \bar{i}\right)\left(j_n - \bar{j}\right). \qquad (20)$$

The probability of lying at a point $r$ is given by

$$p(r) = \frac{1}{\sqrt{(2\pi)^3 |C|}} \exp\left(-\frac{1}{2}\,(r - \bar{r})^t\, C^{-1}\,(r - \bar{r})\right). \qquad (21)$$

The probability of a point $r$ lying within a particular ellipsoid of Mahalanobis distance $R$ is given by the integral

$$p(R) = \sqrt{\frac{2}{\pi}} \int_0^R r^2 \exp\left(-\frac{r^2}{2}\right) dr, \qquad (22)$$

where $R$ is defined by

$$R^2 = (r - \bar{r})^t\, C^{-1}\,(r - \bar{r}). \qquad (23)$$

These probabilities are converted to colours via a look-up table and texture-mapped onto the surface.

Statistical significance tests are used to calculate the probability that two samples are drawn from the same population. The standard multivariate test is Hotelling's $T^2$ test. Given two samples of $n_1$ and $n_2$ points, with means $\bar{r}_1$ and $\bar{r}_2$ and covariance matrices $C_1$ and $C_2$, the pooled estimate for the covariance matrix is

$$C = \frac{(n_1 - 1)\, C_1 + (n_2 - 1)\, C_2}{n_1 + n_2 - 2}, \qquad (24)$$

and Hotelling's $T^2$ statistic is defined as

$$T^2 = \frac{n_1 n_2}{n_1 + n_2}\, (\bar{r}_1 - \bar{r}_2)^t\, C^{-1}\, (\bar{r}_1 - \bar{r}_2). \qquad (25)$$

A significantly large value of $T^2$ indicates that the mean vectors are different for the two populations. The significance or lack of significance of $T^2$ is determined by using the fact that, in the null hypothesis case of equal population means, the transformed statistic

$$F = \frac{n_1 + n_2 - 4}{3\,(n_1 + n_2 - 2)}\, T^2 \qquad (26)$$

follows an F distribution with 3 and $n_1 + n_2 - 4$ degrees of freedom. Again, these probabilities can be converted to a coloured texture map and pasted on to the average surface.
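A sketch of the per-pixel significance test of equations (24)-(26), reusing the per-point means and covariances computed above, is given below. The batched layout and the use of the F survival function to obtain a p-value are assumptions for illustration; the colour look-up applied to the result (for example minus the logarithm of the probability, as in the key of Figure 7) is not shown.

```python
import numpy as np
from scipy.stats import f as f_dist

def hotelling_map(mean1, cov1, n1, mean2, cov2, n2):
    """Per-pixel Hotelling T^2 test between two facial averages, equations (24)-(26).
    Inputs are the per-point means (H, W, 3) and covariances (H, W, 3, 3)."""
    # Pooled covariance estimate, equation (24).
    C = ((n1 - 1) * cov1 + (n2 - 1) * cov2) / (n1 + n2 - 2)
    d = mean1 - mean2
    Cinv = np.linalg.inv(C)
    # Hotelling's T^2 statistic, equation (25).
    T2 = (n1 * n2) / (n1 + n2) * np.einsum('hwi,hwij,hwj->hw', d, Cinv, d)
    # Transform to an F statistic with (3, n1 + n2 - 4) degrees of freedom, equation (26).
    F = (n1 + n2 - 4) / (3.0 * (n1 + n2 - 2)) * T2
    p_value = f_dist.sf(F, 3, n1 + n2 - 4)   # probability under the null hypothesis
    return T2, p_value
```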

5 Status Report

To test the system, the facial images of thirty male and thirty female adult Caucasian volunteer subjects from the staff at Stoke Mandeville Hospital, Aylesbury, were registered to the morphograph's reference frame, laser-scanned and photographed. The average male is shown in Figure 5 and the average female in Figure 6. Significance testing between these two averages is shown in Figure 7. The results indicate that, as expected, the male and female averages are significantly different across the facial surface. Comparison of a male subject with the male average is shown in Figure 8 and comparison of a female subject with the female average is shown in Figure 9.

In order to test the system with abnormal facial profiles, the facial scan of one of the normal subjects in our database was modified using a surgical simulator. The simulator uses a simplified version of the finite element models for the skin described in [28] [29]. In this system only a single-layer soft-tissue model is used (as opposed to a three-layer model), which is attached to the hard tissues extracted from the CT scan of a dry skull. Modifying the positions of the hard tissue fragments in the CT data causes a change in the soft tissue profile and a corresponding variation in the facial surface probabilities. Figure 10 shows the segmented CT data, the texture-mapped surface and the initial surface probability map. Advancing the jaw by 10 mm, as shown in Figure 11, decreases the facial probabilities far less than advancing the maxillae by 10 mm, as shown in Figure 12. Visualisation of this kind could allow a surgeon to locate the optimal positions of the facial hard tissues prior to surgery. Three-dimensional colour models of the results given above can be found at http://psych.st-and.ac.uk:8080/people/personal/bpt/paperFigs.html.

6 Lessons Learned

The aim of this work was to test the feasibility of extending the clinically established methods of morphanalysis to 3D laser-range data using modern computer graphics methods. The results show that it is indeed possible to do so and that these extensions enhance the existing methods. For example, the combination of image warping to relate anatomical features, together with the standardisation inherent in the fixed relations principle, allows quantitative comparison of all the scanned points on a subject's face (approximately 30,000). This allows higher order statistical analysis of the points, such as the probabilistic analysis of an individual against the mean, or the comparison of two averages using significance testing.


7 Future Plans

The methods described in this paper have a wide range of potential applications in craniofacial assessment, diagnosis, treatment planning and research. The immediate goal is to test the system on clinical patients, for example by constructing average facial images of cleft-palate patients at different ages. Different treatments for the same craniofacial condition could be compared using significance testing on averages derived from the two treatment groups. For example, showing that two treatments do not have significantly different outcomes would provide strong evidence for the adoption of the simpler or safer method.

There are also many possibilities for the enhancement of the computational methods. Further automation of the feature correspondence problem, for example using optical flow [20], PCA [30] or other methods [9], would decrease the burden on the user and may lead to increased accuracy. The potential of combining surface statistical modelling with surgical simulators also needs to be further explored, for example by attempting to calculate an optimal pre-surgical plan for a particular patient.

Acknowledgments

The authors wish to thank Sir James Savile, who funded this work via the Jimmy Savile Stoke Mandeville Hospital Trust, and the staff at Stoke Mandeville Hospital, Aylesbury, Bucks., who participated as volunteers.

References

[1] G.P. Rabey, `Current principles of morphanalysis and their implications in oral surgical practice,' British Journal of Oral Surgery, Vol. 15, pp. 97-109, 1977-78.

[2] G.P. Rabey, `Current and Future Practice in Craniofacial Morphanalysis,' in Surgical Correction of Dentofacial Deformities: New Concepts, Ed. William H. Bell, W.B. Saunders Co., pp. 715-731, 1985.

[3] B. Tiddeman, N. Duffy, G. Rabey and J. Lokier, `Laser-video scanner calibration without the use of a frame store,' IEE Proceedings Vision, Image and Signal Processing, Vol. 145, No. 4, 1998.

[4] P.J. Benson and D.I. Perrett, `Synthesizing Continuous Tone Caricatures,' Image and Vision Computing, Vol. 9, pp. 123-129, 1991.

[5] P.J. Benson and D.I. Perrett, `Extracting prototypical facial images from exemplars,' Perception, Vol. 22, pp. 257-262, 1993.

[6] D.I. Perrett, K.A. May and S. Yoshikawa, `Facial shape and judgements of female attractiveness,' Nature, Vol. 368, pp. 170-178, 1994.

[7] F.L. Bookstein, `Principal warps: Thin-plate splines and the decomposition of deformations,' IEEE Trans. on PAMI, Vol. 11, No. 6, pp. 567-585, June 1989.

[8] A.M. McCance, J.P. Moss, W.R. Fright, A.D. Linney and D.R. James, `Three-dimensional analysis techniques,' (4 papers) The Cleft Palate - Craniofacial Journal, Vol. 34, No. 1, pp. 36-62, January 1997.

[9] T. Yamada, T. Sugahara, Y. Mori, K. Minami and M. Sakuda, `Development of a 3-D measurement and evaluation system for facial forms with a liquid crystal range finder,' Computer Methods and Programs in Biomedicine, Vol. 58, pp. 159-173, 1999.

[10] C.B. Cutting, F.L. Bookstein, B. Haddad, D. Dean and D. Kim, `A spline-based approach for averaging three-dimensional curves and surfaces,' SPIE Mathematical Methods in Medical Imaging, Vol. 2035, pp. 29-44, 1993.

[11] P. Thompson and A.W. Toga, `Visualization and mapping of anatomical abnormalities using a probabilistic brain atlas based on random fluid transformations,' Proc. Visualization in Biomedical Computing (VBC '96), pp. 384-392, Springer, 1996.

[12] P.M. Thompson and A.W. Toga, `A Surface-Based Technique for Warping Three-Dimensional Images of the Brain,' IEEE Transactions on Medical Imaging, Vol. 15, No. 4, August 1996.

[13] G. Subsol, J-P. Thirion and N. Ayache, `A General Scheme for Automatically Building 3D Morphometric Anatomical Atlas: Application to a Skull Atlas,' INRIA research report 2586, ftp.INRIA.fr/INRIA/tech-reports/RR/RR2586.ps.gz, 1995.

[14] G.E. Christensen, R.D. Rabbitt and M.I. Miller, `3D Brain Mapping Using a Deformable Neuroanatomy,' Physics in Medicine and Biology, Vol. 39, pp. 609-618, 1994.

[15] R. Bajcsy and S. Kovacic, `Multiresolution Elastic Matching,' Computer Vision, Graphics and Image Processing, Vol. 46, pp. 1-21, 1989.

[16] M.I. Miller, G.E. Christensen, Y. Amit and U. Grenander, `Mathematical textbook of deformable neuroanatomies,' Proc. National Academy of Sciences, Vol. 90, No. 24, pp. 11944-11948, December 1993.

[17] M. Bro-Nielsen and C. Gramkow, `Fast Fluid Registration of Medical Images,' Proc. 4th Int. Conf. on Visualization in Biomedical Computing (VBC '96), pp. 267-276, 1996.

[18] J-P. Thirion, `Non-rigid matching using demons,' Proc. Int. Conf. Computer Vision and Pattern Recognition (CVPR '96), San Francisco, June 1996.

[19] T. Schormann, S. Henn and K. Zilles, `A new approach to fast elastic alignment with application to human brains,' Proc. 4th Int. Conf. on Visualization in Biomedical Computing (VBC '96), pp. 337-342, 1996.

[20] A.J. O'Toole, T. Vetter, H. Volz and E.M. Salter, `Three-dimensional caricatures of human heads: Distinctiveness and the perception of facial age,' Perception, Vol. 26, pp. 719-732, 1997.

[21] B.P. Tiddeman, Ph.D. Thesis, Heriot-Watt University, 1998.

[22] M. Kass, A. Witkin and D. Terzopoulos, `Snakes: Active contour models,' Int. J. Computer Vision, Vol. 1, No. 4, pp. 321-331, 1988.

[23] S-Y. Lee, K-Y. Chwa, S.Y. Shin and G. Wolberg, `Image metamorphosis using snakes and free-form deformations,' Computer Graphics (SIGGRAPH '95 Proceedings), pp. 439-448, 1995.

[24] S.-Y. Lee, G. Wolberg and S.Y. Shin, `Scattered data interpolation with multilevel B-splines,' IEEE Transactions on Visualization and Computer Graphics, Vol. 3, No. 3, pp. 229-244, 1997.

[25] W.M. Hsu, J.F. Hughes and H. Kaufman, `Direct manipulation of free-form deformations,' Computer Graphics (SIGGRAPH '92 Proceedings), Vol. 26, No. 2, pp. 177-184, 1992.

[26] W. Welch and A. Witkin, `Variational surface modelling,' Computer Graphics (SIGGRAPH '92 Proceedings), Vol. 26, No. 2, pp. 157-166, 1992.

[27] B.F.J. Manly, Multivariate Statistical Methods: A Primer, 2nd Edition, Chapman and Hall, 1994.

[28] R.M. Koch, M.H. Gross, F.R. Carls, D.F. von Buren, G. Frankhauser and Y.I.H. Parish, `Simulating facial surgery using finite element models,' Computer Graphics (SIGGRAPH '96 Proceedings), pp. 421-428, 1996.

[29] E. Keeve, S. Girod and B. Girod, `Craniofacial surgery simulation,' Proc. Visualization in Biomedical Computing (VBC '96), pp. 541-546, Springer, 1996.

[30] A. Lanitis, C.J. Taylor and T.F. Cootes, `Automatic interpretation and coding of face images using flexible models,' IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, pp. 743-755, July 1997.

Figures

Figure 1: Depth-only averaging does not produce the `common sense' shape average in many simple examples. Above, the average of two cylinders A and B of equal radius but different height should be C, but the cylindrical depth-only average produces D.

Figure 2: The fixed relations principle can be applied to any form with three identifiable points. For the human head it is implemented using a cephalostat and an orbitale marker.


Figure 3: The feature template of connected contours used in this paper.

Figure 4: The positive curvature (left) and negative curvature (right) of a facial depthmap.


Figure 5: The average male rendered with texture mapping.

Figure 6: The female average rendered with texture mapping.


Figure 7: The significance of the difference between the male and female averages displayed as a texture map. (Colour key: -log|probability|, ranging from 0.0 to 4.0.)

Figure 8: Comparison of a normal male subject with the male average. (Colour key: Mahalanobis distance from 0.0 to 5.0; probability from 0.0 to 0.997.)

Figure 9: Comparison of a normal female subject with the female average. (Colour key: Mahalanobis distance from 0.0 to 5.0; probability from 0.0 to 0.997.)

Figure 10: A surgical simulator is set up by connecting a registered CT data set (left) and a surface data set (centre). The facial surface probabilities before moving the hard tissues are also shown (right). (Colour key: Mahalanobis distance from 0.0 to 5.0; probability from 0.0 to 0.997.)

Figure 11: Advancement of the jaw of the CT data by 10 mm (left) and the corresponding changes in the texture-mapped surface (centre) and the surface probabilities (right). (Colour key: Mahalanobis distance from 0.0 to 5.0; probability from 0.0 to 0.997.)

Figure 12: Advancement of the maxillae of the CT data by 10 mm (left) and the corresponding changes in the texture-mapped surface (centre) and the surface probabilities (right). (Colour key: Mahalanobis distance from 0.0 to 5.0; probability from 0.0 to 0.997.)