Research | Precision Engineering—Article

A Dual-Platform Laser Scanner for 3D Reconstruction of Dental Pieces

Shuming Yang, Xinyu Shi, Guofeng Zhang, Changshuo Lv
State Key Laboratory for Manufacturing System Engineering, Xi’an Jiaotong University, Xi’an 710049, China

Received date: 02 May 2018

Revised date: 29 Sep 2018

Accepted date: 29 Oct 2018

Published date: 19 Dec 2018

Copyright

© 2018 The Authors

Abstract

This paper presents a dual-platform scanner for dental reconstruction based on a three-dimensional (3D) laser-scanning method. The scanner combines translation and rotation platforms to perform holistic scanning. A hybrid calibration method for laser scanning is proposed to improve convenience and precision. This method includes an integrative method for data collection and a hybrid algorithm for data processing. The integrative method conveniently collects a substantial number of calibrating points with a stepped gauge and a pattern for both the translation and rotation scans. The hybrid algorithm, which consists of a basic model and a compensation network, achieves strong stability with small errors. Experiments verified the hybrid calibration method and the application of the scanner to the measurement of dental pieces. Two typical dental pieces were measured, and the experimental results demonstrated the validity of the measurement performed using the dual-platform scanner. The method is effective for the 3D reconstruction of dental pieces, as well as of objects with irregular shapes in other engineering fields.

Cite this article

Shuming Yang, Xinyu Shi, Guofeng Zhang, Changshuo Lv. A Dual-Platform Laser Scanner for 3D Reconstruction of Dental Pieces[J]. Engineering, 2018, 4(6): 796-805. DOI: 10.1016/j.eng.2018.10.005

1. Introduction

Dental cavity preparation is a basic clinical operation skill in oral medicine. The cavity, which is used to contain filler material in order to restore the shape and function of the tooth, is formed by surgically removing the carious lesion. Rigorous criteria for the cavity in terms of depth, length, width, and angle make its assessment an important task in clinical teaching. Digital assessment using computer-assisted three-dimensional (3D) reconstruction has now become an important means for dental teaching; however, the digital assessment systems in common use are expensive and still need to be improved in terms of blind areas and precision [1,2]. 3D laser scanning has the advantages of high precision, fast speed, and easy implementation [3–6]. This method projects a laser onto the object and collects images of the object with the laser stripe, thereby actively forming similar-triangle relationships between the image and the object. Calibration is critical for laser scanning, as it determines the validity and precision of the measurement results.
There are two main problems in calibration. The first is how to collect a substantial number of accurate calibrating points using appropriate methods. The wire-drawing method [7] and the dentiform bar method [8], which were proposed early on, depend on expensive external equipment and yield few calibrating points. Although Huynh et al. [9] achieved high-precision calibrating points based on the invariance of the cross-ratio, these calibrating points are sometimes insufficient. At present, a planar target is widely used to collect the calibrating points, owing to its simple fabrication and flexible operation [10–12]. For a rotation scan, the rotation axis must also be calibrated [13–15]. Most of the abovementioned methods involve complicated manual operations. Convenient calibration is becoming important, because recalibration must be performed frequently in order to eliminate errors caused by movements or environmental changes.
The second problem is how to calculate the parameters using an appropriate algorithm. Calibration algorithms can be divided into two types: the mathematical method and the machine-learning method. The mathematical method establishes mathematical formulas according to the principle of 3D laser scanning. Due to imaging distortions, structural errors, and other uncertainties, complete and precise mathematical formulas usually turn out to be very complex. The machine-learning method builds the transformation relations between image coordinates and spatial coordinates directly, using artificial neural networks (ANNs) and genetic algorithms [16,17]. As a black-box algorithm, the machine-learning method requires no camera calibration or mathematical formulas; however, it has disadvantages such as slow convergence and poor generalization. This paper presents a dual-platform scanner for dental reconstruction and assessment, and proposes a hybrid calibration method for laser scanning to improve convenience and precision.

2. Methodologies

2.1. Laser scanning

In 3D laser scanning, when the laser line is projected onto the object being measured, an image of the part of the object carrying the light stripe is acquired by the camera, as shown in Fig. 1. If P is a point on the object being measured, its image P′ lies on the light stripe in the image plane when it is scanned by the laser. The world coordinate frame $o_w x_w y_w z_w$ describes the 3D information of the object. The camera coordinate frame $o_c x_c y_c z_c$ and the image coordinate frame $o_0 xy$ are established with origins $o_c$ and $o_0$, where $o_c$ is the optical center and $o_0$ is the intersection of the optical axis and the image plane. The distance between $o_c$ and $o_0$ is $f$, which is also called the focal length. The pixel array of the complementary metal-oxide semiconductor (CMOS) camera is expressed by the frame $ouv$. As the position of P′ in $ouv$ can be found through image processing, the principle is to derive the transformation from $ouv$ to $o_w x_w y_w z_w$.
Fig. 1 Principle of 3D laser scanning.

If the pixel coordinates of $o_0$ in the pixel array are $(u_0, v_0)$, then the transformation from $ouv$ to $o_0 xy$ is

$$x = s_x (u - u_0), \qquad y = s_y (v - v_0) \tag{1}$$

where $s_x$ and $s_y$, as given by the camera manufacturer, are the physical dimensions of a pixel on the CMOS camera in the corresponding directions.
In the ideal pinhole imaging model of the camera, the proportional relationship between $P(x_c, y_c, z_c)$ and $P'(x, y)$ is

$$\frac{x_c}{x} = \frac{y_c}{y} = \frac{z_c}{f} \tag{2}$$
The transformation from $o_0 xy$ to $o_c x_c y_c z_c$ is based on this proportional relationship, which is usually expressed in the form of a homogeneous matrix, as follows:

$$z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} \tag{3}$$
Meanwhile, a rigid-body transformation relates $o_w x_w y_w z_w$ and $o_c x_c y_c z_c$ in 3D space. Supposing that $\mathbf{R}$ is a 3 × 3 rotation matrix and $\mathbf{T}$ is a translation vector, the transformation is

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = \begin{bmatrix} \mathbf{R} & \mathbf{T} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} r_1 & r_2 & r_3 & t_x \\ r_4 & r_5 & r_6 & t_y \\ r_7 & r_8 & r_9 & t_z \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{4}$$
Thus, we can derive the transformation from $ouv$ to $o_w x_w y_w z_w$ from Eqs. (1), (3), and (4):

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/s_x & 0 & u_0 \\ 0 & f/s_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \mathbf{R} & \mathbf{T} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \mathbf{M}_1 \mathbf{M}_2 \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{5}$$
Eq. (5) is the camera model, where $\mathbf{M}_1$ is the intrinsic parameter matrix and $\mathbf{M}_2$ is the extrinsic parameter matrix. The model can also be written with a single projection matrix $\mathbf{M}$, as shown in Eq. (6):

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathbf{M} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{6}$$
In 3D laser scanning, the world coordinate axis $x_w$ is usually parallel to the scanning direction, so $x_w$ can be acquired directly from the scanner. The remaining 3D information, $(y_w, z_w)$, is worked out by eliminating $z_c$ in Eq. (6):

$$y_w = \frac{(m_{23}m_{34} - m_{24}m_{33})u + (m_{14}m_{33} - m_{13}m_{34})v + (m_{13}m_{24} - m_{14}m_{23})}{(m_{22}m_{33} - m_{23}m_{32})u + (m_{13}m_{32} - m_{12}m_{33})v + (m_{12}m_{23} - m_{13}m_{22})}$$
$$z_w = \frac{(m_{24}m_{32} - m_{22}m_{34})u + (m_{12}m_{34} - m_{14}m_{32})v + (m_{14}m_{22} - m_{12}m_{24})}{(m_{22}m_{33} - m_{23}m_{32})u + (m_{13}m_{32} - m_{12}m_{33})v + (m_{12}m_{23} - m_{13}m_{22})} \tag{7}$$
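As a concrete illustration of Eq. (7), the following minimal NumPy sketch (our own helper, not from the paper) maps a pixel $(u, v)$ to world coordinates given the 3 × 4 projection matrix $\mathbf{M}$ of Eq. (6):

```python
import numpy as np

def pixel_to_world(u, v, M):
    """Evaluate Eq. (7): recover (y_w, z_w) on the laser plane from pixel (u, v).

    M is the 3 x 4 projection matrix of Eq. (6); M[i, j] holds m_(i+1)(j+1).
    x_w is read directly from the translation stage, so only 2D is solved here.
    """
    m = np.asarray(M, dtype=float)
    den = ((m[1, 1]*m[2, 2] - m[1, 2]*m[2, 1])*u
           + (m[0, 2]*m[2, 1] - m[0, 1]*m[2, 2])*v
           + (m[0, 1]*m[1, 2] - m[0, 2]*m[1, 1]))
    yw = ((m[1, 2]*m[2, 3] - m[1, 3]*m[2, 2])*u
          + (m[0, 3]*m[2, 2] - m[0, 2]*m[2, 3])*v
          + (m[0, 2]*m[1, 3] - m[0, 3]*m[1, 2])) / den
    zw = ((m[1, 3]*m[2, 1] - m[1, 1]*m[2, 3])*u
          + (m[0, 1]*m[2, 3] - m[0, 3]*m[2, 1])*v
          + (m[0, 3]*m[1, 1] - m[0, 1]*m[1, 3])) / den
    return yw, zw
```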
Eq. (7) is the basic model of 3D laser scanning; it holds under ideal conditions. In practical applications, however, a variety of nonlinear distortions occur. The main distortions that affect the imaging results are radial distortion and tangential distortion [18], which come from the shape of the lens and the assembly of the camera, respectively. Their distortion models are
$$\begin{cases} x = x_d \left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) \\ y = y_d \left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) \end{cases} \tag{8}$$

$$\begin{cases} x = x_d + \left[2 p_1 x_d y_d + p_2 \left(r^2 + 2 x_d^2\right)\right] \\ y = y_d + \left[2 p_2 x_d y_d + p_1 \left(r^2 + 2 y_d^2\right)\right] \end{cases} \tag{9}$$
where $x_d$ and $y_d$ are the real imaging positions; $x$ and $y$ are the ideal imaging positions; $r^2 = x_d^2 + y_d^2$; $k_1$, $k_2$, and $k_3$ are the radial distortion coefficients; and $p_1$ and $p_2$ are the tangential distortion coefficients. In this case, $f$, $u_0$, $v_0$, $\mathbf{R}$, $\mathbf{T}$, $k_1$, $k_2$, $k_3$, $p_1$, and $p_2$ need to be determined through calibration before the scanner begins the measurement.
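For illustration, the two models can be combined into one correction step; the helper below is our own sketch, following the standard Brown–Conrady form that Eqs. (8) and (9) instantiate:

```python
def undistort_point(xd, yd, k1, k2, k3, p1, p2):
    """Map a distorted image point (x_d, y_d) to its ideal position (x, y)
    by combining the radial model of Eq. (8) with the tangential model of
    Eq. (9) (standard Brown-Conrady correction)."""
    r2 = xd**2 + yd**2
    radial = 1.0 + k1*r2 + k2*r2**2 + k3*r2**3
    x = xd*radial + 2.0*p1*xd*yd + p2*(r2 + 2.0*xd**2)
    y = yd*radial + 2.0*p2*xd*yd + p1*(r2 + 2.0*yd**2)
    return x, y
```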

2.2. Calibration methods

As stated above, calibration establishes the transformation relationship between the pixel coordinates $(u, v)$ and the world coordinates $(y_w, z_w)$. In general, there are two kinds of methods: the mathematical method and the machine-learning method.
The mathematical method first establishes mathematical formulas based on the calibration principle, and then works out the unknown parameters of these formulas through nonlinear optimization. Tsai's two-step method [19] and Zhang's method [20] are the most widely used forms of the mathematical method. Tsai's two-step method uses a 3D calibration target, while Zhang's method uses a planar calibration target. In Zhang's method, several planes in different positions are used to calculate the parameters, because the points on each plane can be used to set up two equations. Zhang calculated the initial values of the parameters under the assumption of no distortion, and then worked out the distortion coefficients from these initial values by the least-squares method. The precision is then refined by maximum-likelihood estimation.
In the Tsai’s two-step method, since only quadratic radial distortion is considered, the radial arrangement constraint is applied:
xdyd=xy=r1xw+r2yw+r3zw+txr4xw+r5yw+r6zw+ty
Because u0,v0 can be determined through the optical method, xd,yd are known data. The intermediate parameters, r1/ty, r2/ty, r3/ty, tx/ty, r4/ty, r5/ty, and r6/ty, can be worked out from Eq. (10), if there are more than seven calibrating points. First, the extrinsic parameters, R, tx, and ty, are calculated based on the orthogonality of the rotation matrix. The other parameters, f, k1, and tz, are approached based on the camera and distortion model by nonlinear optimization.
The machine-learning method establishes the transformation relationship between the input $(u, v)$ and the output $(y_w, z_w)$ directly by training on sample data. In essence, this is a black-box method that requires no intrinsic or extrinsic parameters. ANNs are typical machine-learning algorithms. For example, the back-propagation network (BPN, a kind of ANN) has been shown to be an effective method for building nonlinear mapping relationships with high versatility and precision [21,22]. Using the steepest-descent method, the BPN adjusts its weights and thresholds to learn the mapping relationship according to the back-propagated errors. As shown in Fig. 2, its structure consists of an input layer, a hidden layer, and an output layer. Each layer has several nodes that are analogous to biological nerve cells.
Fig. 2 Structure of a three-layer BPN.

The learning process of the BPN has two directions. The forward propagation of data realizes an estimated mapping relationship from $n$ dimensions to $m$ dimensions, while the back propagation of errors helps to revise this mapping relationship. In forward propagation, the input data flow to the hidden layer and then to the output layer. For a node $h_k$ in the hidden layer, the value is determined by the threshold $a_k$, the related input data $x_i$, and the corresponding weights $v_{ki}$:

$$h_k = f_1\!\left(\sum_{i=0}^{n} v_{ki} x_i\right) = f_1(\mathbf{X}\mathbf{V}_k) = f_1(S_k) \tag{11}$$
where $\mathbf{V}_k = [a_k \; v_{k1} \; \cdots \; v_{ki} \; \cdots \; v_{kn}]^{\mathrm{T}}$, $\mathbf{X} = [1 \; x_1 \; \cdots \; x_i \; \cdots \; x_n]$, $f_1$ is the activation function, and $S_k$ is the node's net input. Similarly, for a node $y_j$ in the output layer, the value is determined by the threshold $b_j$, the related $h_k$, and the corresponding weights $w_{jk}$:

$$y_j = f_2\!\left(\sum_{k=0}^{q} w_{jk} h_k\right) = f_2(\mathbf{H}\mathbf{W}_j) = f_2(S_j) \tag{12}$$
where $\mathbf{W}_j = [b_j \; w_{j1} \; \cdots \; w_{jk} \; \cdots \; w_{jq}]^{\mathrm{T}}$, $\mathbf{H} = [1 \; h_1 \; \cdots \; h_k \; \cdots \; h_q]$, $f_2$ is the activation function, and $S_j$ is the node's net input. In back propagation, the errors between the desired output and the actual output are used to adjust the weights and thresholds in order to minimize the global error function. If the number of learning samples is $P$, then the global error function is

$$E = \sum_{p=1}^{P} E_p = \frac{1}{2}\sum_{p=1}^{P}\sum_{j=1}^{m}\left(t_j^p - y_j^p\right)^2 \tag{13}$$
where $E_p$ is the error of the $p$th sample and $t_j^p$ is the desired output. The changes in the weights and thresholds are calculated from the partial derivatives of $E_p$ with a learning rate $\eta$; for the output layer and the hidden layer, they are

$$\Delta w_{jk} = -\eta\frac{\partial E}{\partial w_{jk}} = \sum_{p=1}^{P}\left(-\eta\frac{\partial E_p}{\partial w_{jk}}\right), \qquad \Delta v_{ki} = -\eta\frac{\partial E}{\partial v_{ki}} = \sum_{p=1}^{P}\left(-\eta\frac{\partial E_p}{\partial v_{ki}}\right) \tag{14}$$
Eq. (14) can be expanded into a specific form through the chain rule:

$$\Delta w_{jk} = \sum_{p=1}^{P} \eta\left(t_j^p - y_j^p\right) f_2'(S_j)\, h_k, \qquad \Delta v_{ki} = \sum_{p=1}^{P}\sum_{j=1}^{m} \eta\left(t_j^p - y_j^p\right) f_2'(S_j)\, w_{jk}\, f_1'(S_k)\, x_i \tag{15}$$
The network structure gives the BPN a strong nonlinear mapping ability. To apply the BPN, $(u, v)$ and $(y_w, z_w)$ are taken as the input and output data, and the calibration is completed by a learning process based on Eqs. (11), (12), and (15).
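For illustration, here is a minimal NumPy sketch of such a training loop. The details the text does not fix are assumed: a tanh hidden activation, a linear output layer, and batch gradient steps on normalized data.

```python
import numpy as np

rng = np.random.default_rng(0)
f1 = np.tanh                            # hidden-layer activation f1
df1 = lambda s: 1.0 - np.tanh(s) ** 2   # its derivative f1'

def train_bpn(X, T, hidden=5, eta=0.01, epochs=100):
    """Batch steepest-descent training following Eqs. (11), (12), and (15).

    X: (P, n) normalized inputs (u, v); T: (P, m) targets (y_w, z_w).
    The output layer is taken as linear (f2 = identity, f2' = 1).
    """
    n, m = X.shape[1], T.shape[1]
    V = rng.normal(0.0, 0.1, (hidden, n + 1))   # rows [a_k, v_k1..v_kn]
    W = rng.normal(0.0, 0.1, (m, hidden + 1))   # rows [b_j, w_j1..w_jq]
    Xb = np.hstack([np.ones((len(X), 1)), X])   # prepend bias input x_0 = 1
    for _ in range(epochs):
        S_k = Xb @ V.T                          # hidden net inputs, Eq. (11)
        Hb = np.hstack([np.ones((len(X), 1)), f1(S_k)])
        Y = Hb @ W.T                            # forward pass, Eq. (12)
        delta = T - Y                           # output error (t_j - y_j)
        back = (delta @ W[:, 1:]) * df1(S_k)    # error pushed to hidden layer
        W += eta * delta.T @ Hb                 # output-layer step, Eq. (15)
        V += eta * back.T @ Xb                  # hidden-layer step, Eq. (15)
    return V, W
```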
The mathematical method is specific and robust, but it is difficult to work out its large set of parameters. To stay tractable, the mathematical formulas usually model only the main distortions, which leaves them unable to handle other nonlinear factors and uncertainties. The machine-learning method can, in principle, deal with all the imaging distortions, structural errors, and other uncertainties, so it seems quite appropriate for the calibration of laser scanning. In practice, however, the expected results are not achieved. The method is also apt to fall into local minima, leading to overfitting and poor generalization. Poor generalization means that the network performs considerably worse on testing samples than on training samples; in this case, only the calibrating points themselves can be measured accurately. In general, the distortions and errors are two orders of magnitude smaller than the ideal values determined by the basic model. Both the mathematical and machine-learning methods process the data directly, which causes the distortions and errors to be concealed by the ideal values. This is an important influencing factor on precision that has been ignored. It can be revealed by normalization, which is a common step in data processing:

$$Y_i = \frac{y_i - y_{\min}}{y_{\max} - y_{\min}} = \frac{\left(1 + e_i/f_i\right) - \left(f_{\min} + e_{\min}\right)/f_i}{\left(f_{\max} + e_{\max}\right)/f_i - \left(f_{\min} + e_{\min}\right)/f_i} \approx \frac{f_i - f_{\min}}{f_{\max} - f_{\min}} \tag{16}$$

where $y$ is the sample data, which consist of the ideal value $f$ and a residual part $e$ that contains the distortions and errors. In Eq. (16), the numerator and denominator are both divided by $f_i$. Since $e/f$ is close to zero, the final normalized value is close to that of the ideal model.
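A toy numerical check (the example values are our own) makes this concealment concrete: residuals on the order of 1% of the ideal values almost vanish after min-max normalization.

```python
import numpy as np

# Illustration of Eq. (16): the ideal values f dominate the residual e,
# so after normalization the sample data y = f + e are nearly
# indistinguishable from the ideal model alone.
f = np.linspace(10.0, 20.0, 6)           # ideal values from the basic model
e = 0.01 * np.sin(np.arange(6))          # distortions and errors (~0.1%)
y = f + e
norm = lambda a: (a - a.min()) / (a.max() - a.min())
print(np.abs(norm(y) - norm(f)).max())   # ~1e-3: the residual is concealed
```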

3. The dual-platform laser scanner

In this paper, a dual-platform laser scanner based on the laser-scanning principle is designed for the 3D reconstruction of dental pieces. In dental cavity preparation, the cavity may be on the front or on a lateral surface of a tooth. Dental pieces with a cavity on a lateral surface can only be scanned by means of a rotation platform, whereas those with a cavity on the front are suitable for a translation platform. The dual-platform structure makes it possible to scan all types of dental models with the 3D laser-scanning system. As shown in Fig. 3, the scanner consists of two cameras, a laser transmitter, and two platforms. The rotation platform is fixed to the translation platform, so the working platform can be switched over by moving the translation platform. In rotation scan mode, the rotation center $O_r$ is moved into the laser plane according to the mark on the rotation platform. The rotation platform maintains a dip, β, with the horizontal $x_w o_w y_w$ plane in order to ensure that the object can be scanned entirely. Two cameras are used in the system to collect images from different sides; this effectively eliminates blind areas and ensures the integrity of the point cloud.
Fig. 3 Design of the dual-platform laser scanner.

In the translation scan, the direction of the $x_w$ axis is set parallel to the movement of the translation platform. This direction is perpendicular to the laser plane, which is also the $y_w o_w z_w$ plane. When the system operates, the $x_w$ coordinates are obtained from the control module of the translation platform. Simultaneously, $(y_w, z_w)$ are calculated based on the calibration results. In the rotation scan, the scanner obtains a set of two-dimensional (2D) physical coordinates $(y_w, z_w)$ each time the rotation platform revolves. These physical coordinates must be transformed and assembled into the 3D point cloud $(x_{wr}, y_{wr}, z_{wr})$. There are two steps in this process. First, the tilt of the rotation platform is removed using the dip β and the rotation center $O_r(y_r, z_r)$, yielding tilt-corrected coordinates $(y_w', z_w')$, as shown in Eq. (17). Second, the profiles are assembled one by one according to the rotated angle θ, as shown in Eq. (18). The rotated angle θ can be obtained from the control module of the rotation platform, whereas β and $O_r(y_r, z_r)$ need additional calibration.
$$\begin{bmatrix} y_w' \\ z_w' \end{bmatrix} = \begin{bmatrix} \cos\beta & -\sin\beta & -y_r \\ \sin\beta & \cos\beta & -z_r \end{bmatrix} \begin{bmatrix} y_w \\ z_w \\ 1 \end{bmatrix} \tag{17}$$

$$\begin{bmatrix} x_{wr} & y_{wr} & z_{wr} \end{bmatrix} = \begin{bmatrix} y_w' & z_w' \end{bmatrix} \begin{bmatrix} -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{18}$$
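A minimal sketch of this assembly step, following Eqs. (17) and (18) as reconstructed above (the function and argument names are our own; angles are in radians):

```python
import numpy as np

def assemble_rotation_scan(profile, beta, yr, zr, theta):
    """Turn one scanned 2D profile into 3D points per Eqs. (17) and (18).

    profile: (N, 2) array of (y_w, z_w); beta is the platform dip,
    (yr, zr) the calibrated rotation center O_r, theta the rotated angle.
    """
    c, s = np.cos(beta), np.sin(beta)
    # Eq. (17): remove the platform tilt and re-center on O_r
    yz = profile @ np.array([[c, s], [-s, c]]) - np.array([yr, zr])
    # Eq. (18): place the untilted profile at the rotated angle theta
    y, z = yz[:, 0], yz[:, 1]
    return np.stack([-y*np.sin(theta), y*np.cos(theta), z], axis=1)
```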
A dual-platform laser scanner was constructed, as shown in Fig. 4(a). Each camera had a resolution of 1280 × 1024 and an effective view field of 20 mm × 18 mm. The physical dimensions, sx and sy, of a pixel were 5.2 μm × 5.2 μm. The calibration target for the translation scan was a stepped gauge with five smooth treads, as shown in Fig. 4(b). Each step had a height of 2 mm, and a length and width of 20 mm × 5 mm. The calibration target for the rotation scan was a pattern, as shown in Fig. 4(c). A white circle with a diameter of 10 mm was positioned in the middle.
Fig. 4 Experimental facilities for the dual-platform laser scanner. (a) The dual-platform laser scanner; (b) the translation platform and gauge; (c) the rotation platform and pattern.

4. Hybrid calibrations

For the dual-platform scanner, calibration involves first finding the coordinate transformation from image coordinates to world coordinates for the translation scan; next, the dip β and the rotation center $O_r(y_r, z_r)$ are found for the rotation scan. We use an integrative method to collect the calibrating points. The integrative method can conveniently collect a substantial number of calibrating points and can perform an integrative calibration for both the translation and rotation scans. Furthermore, we propose a hybrid algorithm to establish an effective model. This hybrid algorithm achieves higher precision by combining the mathematical and machine-learning methods.

4.1. Integrative method

In the calibration of the translation scan, the stepped gauge is placed on the translation platform. As the stepped gauge moves with the translation platform, the laser projects onto the different treads. The images of the treads with the laser stripe can be processed to extract a substantial number of calibrating points. The centers of the light stripe in the image are extracted as image coordinates through the Gaussian fitting method [23], while the world coordinates are obtained from the physical dimensions of the gauge. The transformation from image coordinates to world coordinates can then be established through the hybrid algorithm. In the calibration of the rotation scan, the pattern is pasted on the rotation platform. The coordinates of a line on the rotation platform are obtained through this transformation, and the dip β is calculated from the slope of this line. Because the center of rotation lies in the laser plane, $O_r$ can be determined from two images that are snapped before and after rotating by 180°. As shown in Fig. 5, $P_1$ denotes an endpoint of the light stripe on the pattern, and $P_1'$ denotes the corresponding point after rotating by 180°. $P_1$ and $P_1'$ are symmetrical about $O_r$, which can be used to calibrate $O_r(y_r, z_r)$. In general, it is necessary to collect data and perform the calibration several times, with the average result taken as the final result. $P_2$ and $P_2'$ are then used to calculate the error in the calibration results after these results are obtained.
Fig. 5 The symmetrical property of the rotation scan. (a) Initial position; (b) rotated 180°.
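A small sketch of this estimation (the helper names are hypothetical; it assumes the line points and the endpoint pairs have already been transformed into world coordinates via the translation-scan calibration):

```python
import numpy as np

def calibrate_rotation(line_pts, endpoint_pairs):
    """Estimate the dip beta and rotation center O_r from the pattern.

    line_pts: (N, 2) world coordinates (y_w, z_w) of a line on the platform;
    the slope of a fitted line gives the dip beta. endpoint_pairs: sequence
    of (P, P') point pairs taken 180 degrees apart; each pair is symmetric
    about O_r, so O_r is estimated as the mean of the pair midpoints.
    """
    slope = np.polyfit(line_pts[:, 0], line_pts[:, 1], 1)[0]
    beta = np.arctan(slope)                        # dip angle in radians
    mids = [(np.asarray(p) + np.asarray(q)) / 2.0 for p, q in endpoint_pairs]
    o_r = np.mean(mids, axis=0)                    # averaged rotation center
    return beta, o_r
```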

4.2. Hybrid algorithm

Based on the discussion of calibration algorithms above, we propose a hybrid algorithm that combines the mathematical method and the machine-learning method. In this hybrid algorithm, the mapping relationship is divided into two parts: the main part and the compensation part. The main part is determined by the basic model of laser scanning, while the compensation part contains all the distortions and errors, as shown in Fig. 6. The final mapping relationship is made up of the basic model and the network. To establish the hybrid model, the mathematical method is used first in order to work out the basic model. Next, taking the residual between the main part and the real value as the output, the machine-learning method is used to establish the network.
Fig. 6 The hybrid algorithm.

The basic model is provided in Eq. (7); this can be replaced by the following:
$$y_w = \frac{a_1 u + a_2 v + a_3}{a_7 u + a_8 v + a_9}, \qquad z_w = \frac{a_4 u + a_5 v + a_6}{a_7 u + a_8 v + a_9} \tag{19}$$
Its matrix form is
$$\mathbf{U}\mathbf{A} = \mathbf{Y} \tag{20}$$
where

$$\mathbf{A} = \begin{bmatrix} a_1 & a_2 & a_3 & a_4 & a_5 & a_6 & a_7 & a_8 & a_9 \end{bmatrix}^{\mathrm{T}}, \qquad \mathbf{Y} = \begin{bmatrix} y_w & z_w \end{bmatrix}^{\mathrm{T}}$$

$$\mathbf{U} = \begin{bmatrix} u & v & 1 & 0 & 0 & 0 & y_w u & y_w v & y_w \\ 0 & 0 & 0 & u & v & 1 & z_w u & z_w v & z_w \end{bmatrix}$$
In calibration, since there are far more calibrating points than unknowns, a1 – a9 can be worked out through the least squares method:
$$\mathbf{A} = \left(\mathbf{U}^{\mathrm{T}}\mathbf{U}\right)^{-1}\mathbf{U}^{\mathrm{T}}\mathbf{Y} \tag{21}$$
The BPN is used as compensation in order to learn the mapping relationship of the distortions and errors. The input data are the pixel coordinates $(u, v)$, while the output data are the residual $\mathbf{E}(e_x, e_y)$ between the main part and the real value $\mathbf{Y}_{\mathrm{real}}$:

$$\mathbf{E} = \mathbf{Y}_{\mathrm{real}} - \mathbf{U}\mathbf{A} \tag{22}$$
The network has three layers, with two nodes in the input layer and two nodes in the output layer. The hybrid algorithm is superior to the mathematical and machine-learning methods, as it combines the advantages of both while overcoming their shortcomings. Compared with a pure mathematical method, the hybrid model is more complete, because all the distortions and other errors can be compensated for by the network. Compared with a pure machine-learning method, the hybrid model is more specific and robust, because it is no longer a black-box network. The basic model secures the main part of the mapping relationship and improves the generalization ability, which limits the generalization error to the residual level of $\mathbf{E}(e_x, e_y)$. The hybrid algorithm also achieves higher precision in calibration, because it diminishes the influence of the ideal values on the distortions and errors: it divides the mapping relationship into two parts from the beginning, thus avoiding the concealing problem shown in Eq. (16).
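To make the two-stage procedure concrete, here is a hedged NumPy sketch (our own code, not the authors'). Since Eq. (19) determines $a_1$–$a_9$ only up to a common scale factor, the sketch solves the cross-multiplied homogeneous system by SVD, a standard numerical alternative to the normal-equation form of Eq. (21):

```python
import numpy as np

def fit_basic_model(uv, yz):
    """Fit a1..a9 of Eq. (19) from calibrating points (uv: (P, 2), yz: (P, 2))."""
    rows = []
    for (u, v), (yw, zw) in zip(uv, yz):
        rows.append([u, v, 1.0, 0, 0, 0, -yw*u, -yw*v, -yw])
        rows.append([0, 0, 0, u, v, 1.0, -zw*u, -zw*v, -zw])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float), full_matrices=False)
    return Vt[-1]                        # a1..a9, up to a common scale factor

def basic_model(uv, A):
    """Evaluate Eq. (19) for an (N, 2) array of pixel coordinates."""
    u, v = uv[:, 0], uv[:, 1]
    den = A[6]*u + A[7]*v + A[8]
    return np.stack([(A[0]*u + A[1]*v + A[2]) / den,
                     (A[3]*u + A[4]*v + A[5]) / den], axis=1)

# The compensation network of Section 2.2 is then trained on the residual of
# Eq. (22), i.e., residual = yz - basic_model(uv, A), with (u, v) as input.
```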

5. 3D reconstruction

The result of laser scanning is point-cloud data, which needs to be simplified and reconstructed in order to restore the 3D shape of the object.

5.1. Point-cloud reduction

The initial point cloud contains many redundant points; this increases the amount of computation and reduces the efficiency of reconstruction. Therefore, it is necessary to simplify the point cloud before triangulation. We propose a point-cloud simplification method that processes the data according to the morphological characteristics of the point cloud. This method has high processing efficiency and an effective streamlining effect.
For the point cloud obtained by translational scanning, the density of the point-cloud distribution is high in the direction of the light stripe and low in the scanning direction; therefore, there are many redundant points along the light stripe, as shown in Fig. 7. In order to preserve the feature information of the point cloud, the distance between adjacent points on the stripe is calculated. If the distance is greater than a threshold, these points contain more feature information and should be preserved. The remaining points are sampled randomly according to the point-cloud density in the scanning direction (see the sketch following Fig. 7).
Fig. 7 The translational scanning point cloud.

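A hedged sketch of this per-stripe reduction (the function name, thresholds, and the keep/sample split are our illustrative reading of the scheme above):

```python
import numpy as np

def reduce_stripe(stripe, dist_thresh, keep_frac, seed=0):
    """Thin one light-stripe profile while preserving feature points.

    stripe: (N, 3) points ordered along the stripe. A gap to the previous
    neighbor larger than dist_thresh marks a shape feature, so both ends of
    such a gap are always kept; the remaining points are randomly sampled
    with fraction keep_frac, mirroring the density along the scan direction.
    """
    rng = np.random.default_rng(seed)
    gaps = np.linalg.norm(np.diff(stripe, axis=0), axis=1)
    feature = np.zeros(len(stripe), dtype=bool)
    feature[1:] |= gaps > dist_thresh            # keep the point after a gap
    feature[:-1] |= gaps > dist_thresh           # and the point before it
    rest = np.flatnonzero(~feature)
    sampled = rng.choice(rest, int(keep_frac * len(rest)), replace=False)
    keep = np.union1d(np.flatnonzero(feature), sampled)
    return stripe[keep]
```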
For the point cloud obtained by rotational scanning, the point-cloud data are radially distributed around the rotational center. The closer to the rotational center, the higher the point-cloud density and the more redundant points there are, as shown in Fig. 8. Therefore, the point-cloud data can be divided into n concentric ring regions around the center of rotation. The radii of the concentric circles are $r, \sqrt{2}r, \sqrt{3}r, \ldots, \sqrt{n}r$, so that the area of every ring is equal. According to the distribution characteristics of the point cloud, the number of points in each ring is proportional to the width of the ring. If the ratio is k, the point-cloud density of the ith ring is
$$\rho_i = \frac{k\left(\sqrt{i+1} - \sqrt{i}\right)r}{\pi r^2} = \frac{k}{\pi r}\left(\sqrt{i+1} - \sqrt{i}\right) \tag{23}$$

Fig. 8 The rotational scanning point cloud.
According to the point-cloud density in each concentric ring, we set a simplification threshold in order to thin the point cloud ring by ring. Finally, a complete point cloud with a uniform distribution that retains the feature information is obtained (see the sketch below).
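An illustrative implementation of this ring-based reduction (all names and the per-ring quota rule are our own assumptions; it takes the rotation plane as the first two coordinates):

```python
import numpy as np

def reduce_rotational_cloud(points, center, n_rings, per_ring, seed=0):
    """Ring-wise down-sampling of a rotational-scan point cloud.

    points: (N, 3) array whose first two coordinates lie in the rotation
    plane; center: (2,) rotation center. Ring boundaries are placed at
    sqrt(i)*r as in Eq. (23), so every ring has equal area; keeping at most
    per_ring points per ring then yields a roughly uniform density.
    """
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(points[:, :2] - np.asarray(center), axis=1)
    r = d.max() / np.sqrt(n_rings)                # base radius r of Eq. (23)
    ring = np.minimum((d / r) ** 2, n_rings - 1).astype(int)
    kept = []
    for i in range(n_rings):
        idx = np.flatnonzero(ring == i)
        if len(idx) > per_ring:                   # inner rings are denser:
            idx = rng.choice(idx, per_ring, replace=False)
        kept.append(idx)
    return points[np.concatenate(kept)]
```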

5.2. Delaunay triangulation

A triangular mesh occupies less storage space and represents surface fineness better; thus, it has become the main means of realizing 3D display on a computer. In general, there are two ways to triangulate 3D point-cloud data: directly triangulating the 3D points, or projecting the 3D points onto a 2D plane and using 2D planar triangulation to create the mesh. The former approach is computationally expensive, and its algorithms are not stable. 2D planar triangulation has a good theoretical basis and good mathematical characteristics, but it is only suitable for surfaces that can be projected in a certain direction without overlapping.
According to the principle of translational scanning, as shown in Fig. 3, the point-cloud data obtained by the translational scanning of line-structured light can be regarded as the projection of the object onto the $x_w o_w y_w$ plane. Therefore, the translational scanning point cloud represents a surface that projects onto the $x_w o_w y_w$ plane without overlap; it can be projected directly into two dimensions by coordinate transformation and triangulated using Watson's algorithm [24]. The Delaunay triangulation process using Watson's algorithm is as follows: ① build a super triangle ΔE that contains all the points; ② insert a new point from the point set and connect it to the three vertices of the triangle ΔE in order to form the initial mesh; ③ insert a new point and find the "influence triangle," which is the triangle containing the point; ④ delete the common edge of the "influence triangle" and connect the new point to the related vertices in order to form a new mesh; ⑤ repeat ③ and ④ until all of the points in the point set are processed.
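For the 2D step, any Delaunay routine can be substituted; the sketch below uses scipy.spatial.Delaunay (Qhull-based) as a stand-in for Watson's incremental algorithm [24], since both produce the Delaunay triangulation of the projected points:

```python
import numpy as np
from scipy.spatial import Delaunay

# A translation-scan cloud projects onto the x_w-o_w-y_w plane without
# overlap, so a 2D Delaunay triangulation of the projection recovers the
# surface mesh; the original z_w values are kept for the 3D vertices.
cloud = np.random.rand(500, 3)        # placeholder (x_w, y_w, z_w) points
tri = Delaunay(cloud[:, :2])          # Delaunay triangulation in 2D
faces = tri.simplices                 # (n_tri, 3) vertex indices into cloud
```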
Because of the characteristics of the revolving body, the rotational scanning point cloud overlaps itself in every projection direction; therefore, it cannot be triangulated directly through a 2D projection. Considering the acquisition process of the rotational point cloud, the point-cloud data are obtained by line-structured light projection before tilting and splicing. Therefore, the rotating point cloud can be tilted and expanded back according to the scanning position by coordinate transformation. The expanded point cloud has the shape of a line-structured light projection and can be triangulated by 2D projection. After the point cloud is triangulated, the triangular meshes must be combined in order to finally obtain the complete subdivision of the rotating point cloud.
The triangulation of the rotational scanning point cloud can be summarized as follows: ① divide the rotational point cloud into four regions, A, B, C, and D, and ensure that A, B, C, and D have overlapping boundary points, as shown in Fig. 9; ② transform the coordinates of each region about the center of rotation and expand the point cloud into its shape before tilting and splicing; ③ each region then forms a surface that does not overlap itself when projected onto the $x_w o_w y_w$ plane; ④ triangulate the point cloud of each region; ⑤ after the 2D projective triangulation of each region is complete, join the triangular meshes together to form the complete mesh of the original point cloud, according to the overlapping boundary points of the four regions.
Fig. 9 Dividing and expanding the original point cloud.

6. Experiments and results

Experiments using Tsai's two-step method, the BPN method, and the hybrid method were conducted in order to demonstrate the validity of the hybrid calibration. Measurement and reconstruction results after calibration were then obtained.

6.1. Calibration

6.1.1. Data collection

The points were collected by placing the stepped gauge on the translation platform with its first tread under the laser. As the world coordinate frame was set based on the physical dimensions of the gauge, the remaining steps were performed automatically through programmatic control. This section presents the experiment for the left camera. The images of the treads collected in the experiment are shown in Fig. 10. A total of 5038 valid samples were collected after image processing; these were used for the calibration of the translation scan by establishing the coordinate transformation. A pair of symmetrical pattern images collected in the experiment is shown in Fig. 11. Since the interval angle was 15°, a total of 12 pairs of symmetrical images were used for the calibration of the rotation scan.
Fig. 10 Image acquisition of the target. (a) First, (b) second, (c) third, (d) fourth, and (e) fifth tread.

Fig. 11 Image acquisition of the pattern. (a) θ = 0°; (b) θ = 180°.

6.1.2. Calibration results

Based on the hybrid algorithm, the basic model was worked out using Eq. (21). The compensation network was a three-layer BPN with five hidden nodes, which was trained for 100 epochs with a termination error of 10⁻⁴. β and $O_r$ were calculated for each image pair, with the average taken as the final value. The parameters of the basic model and the network are provided below:

$$\mathbf{A} = \begin{bmatrix} -0.027 & 1.038 & -19.580 & -1.997 & -0.003 & 1262.001 & -0.007 & 0 & 44.987 \end{bmatrix}^{\mathrm{T}}$$
$$\beta = 44.320°$$
$$O_r = (5.451, 4.397)$$
$$\mathbf{V}^{\mathrm{T}} = \begin{bmatrix} \mathbf{V}_1 & \mathbf{V}_2 & \mathbf{V}_3 & \mathbf{V}_4 & \mathbf{V}_5 \end{bmatrix} = \begin{bmatrix} 6.118 & -1.775 & -0.246 & -0.536 & 5.575 \\ 7.256 & 1.074 & -0.139 & -0.295 & 6.784 \\ 1.801 & 0.184 & -0.483 & 0.548 & 2.104 \end{bmatrix}$$
$$\mathbf{W} = \begin{bmatrix} \mathbf{W}_1 & \mathbf{W}_2 \end{bmatrix} = \begin{bmatrix} 3.275 & 1.906 \\ 2.209 & 3.904 \\ 2.925 & -1.265 \\ -0.351 & -0.896 \\ -1.031 & 1.324 \\ 0.425 & 1.085 \end{bmatrix}$$
Calibrations were also conducted with Tsai's two-step method and the pure BPN method as contrasting experiments. The calibration results of Tsai's method are given below:
$$\begin{bmatrix} f & k_1 & u_0 & v_0 \end{bmatrix} = \begin{bmatrix} 41.397 & 0.0001 & 637 & 509 \end{bmatrix}$$
$$\beta = 44.761°$$
$$O_r = (5.141, 4.580)$$
$$\mathbf{R} = \begin{bmatrix} -0.859 & -0.005 & -0.512 \\ -0.021 & 1.000 & 0.025 \\ 0.511 & 0.319 & -0.859 \end{bmatrix}$$
$$\mathbf{T} = \begin{bmatrix} -0.282 & -11.896 & 202.904 \end{bmatrix}^{\mathrm{T}}$$
In the pure BPN method, the network also had three layers with five hidden nodes. After being trained for 100 epochs with a termination error of 10⁻⁴, its parameters were as follows:
$$\beta = 43.832°$$
$$O_r = (5.159, 4.032)$$
$$\mathbf{V}^{\mathrm{T}} = \begin{bmatrix} \mathbf{V}_1 & \mathbf{V}_2 & \mathbf{V}_3 & \mathbf{V}_4 & \mathbf{V}_5 \end{bmatrix} = \begin{bmatrix} 2.726 & 0.857 & -0.023 & -1.067 & 5.037 \\ -1.424 & -1.398 & -0.064 & -1.738 & -0.807 \\ 2.514 & -0.030 & 0.197 & -0.028 & -7.379 \end{bmatrix}$$
$$\mathbf{W} = \begin{bmatrix} \mathbf{W}_1 & \mathbf{W}_2 \end{bmatrix} = \begin{bmatrix} 0.124 & 0.0256 \\ -0.247 & 5.097 \\ -0.189 & -0.005 \\ -0.077 & 0.003 \\ 0.788 & -0.047 \\ 0.570 & -0.003 \end{bmatrix}$$

6.1.3. Discussion

The stepped gauge was scanned using the different methods, and five equally spaced points were picked on each tread. In this case, the points on each tread were uniformly distributed along the x axis of the image plane, while the treads were uniformly ordered along the y axis. A reference plane 2 mm below the first tread was also scanned; its image fell on the edge of the pixel array and was used to test the generalization ability. The distribution and statistics of the errors are shown in Fig. 12 and Table 1. The performances of the networks in the pure BPN method and the hybrid method are shown in Fig. 13. For the rotation scan, the endpoints on the pattern were picked out in order to calculate the errors, as shown in Fig. 14 and Table 1.
Fig. 12 Errors in the translation scan.

Table 1 Error statistics.
Method    Maximum error (mm)   Minimum error (mm)   Mean error (mm)      RMS (mm)
          T. scan   R. scan    T. scan   R. scan    T. scan   R. scan    T. scan   R. scan
Tsai      0.045     0.086      0.014     0.032      0.030     0.054      0.030     0.056
BPN       0.055     0.077      0.004     0.034      0.023     0.058      0.027     0.060
Hybrid    0.027     0.046      0.005     0.015      0.015     0.029      0.016     0.031

T. scan: translation scan; R. scan: rotation scan.
Fig. 13 Performance of (a) pure BPN and (b) hybrid BPN. MSE: mean squared error.

Fig. 14 Errors in the rotation scan.

As shown in Fig. 12, Tsai's method has regular, steady errors. For the whole gauge, the treads close to the middle had smaller errors than those close to the edge; for each tread, the points close to the middle had smaller errors than those close to the edge. The errors on the reference plane were the worst, but still followed this regularity. This distribution is very similar to that of the distortions, which explains where the errors mainly come from. The root-mean-square (RMS) errors in the translation scan and rotation scan were 0.030 and 0.056 mm, respectively. The BPN method seems to perform better than Tsai's method on the treads, even with its irregularly distributed errors. On the reference plane, however, the errors suddenly became much larger, which was caused by the poor generalization ability of this method. As a result, the overall RMS errors of the BPN in the translation scan and rotation scan reached 0.027 and 0.060 mm, respectively. The hybrid method achieved the best performance in this experiment, with the smallest errors compared with Tsai's method and the BPN method: the RMS errors were 0.016 and 0.031 mm. The basic model ensures steady errors, even on the reference plane, and the separation of the ideal values and the errors also improves the performance of the network. As shown in Fig. 13, the mean squared error (MSE) of the network in the hybrid method drops to 0.0014612 at epoch 19, while that in the pure BPN method only reaches 0.0014651 at epoch 932; thus the MSE converges much faster in the hybrid method than in the pure BPN method. The dental mold measurement error is required to be less than 0.2 mm, so our measurement method satisfies the accuracy requirement.

6.2. Measurement

Typical dental pieces measured by means of a translation scan and a rotation scan, respectively, are shown in Figs. 15 and 16. The measurement results and reconstruction process are shown in Figs. 17 and 18. Fig. 17(a) is the primary point cloud with a total of 37 983 points, while Fig. 17(b) is the point cloud after de-noising and reduction, with the total number of points decreased to 6218. Fig. 17(c) shows the result of the Delaunay triangulation, and the final 3D reconstruction is shown in Fig. 17(d). Fig. 18(a) is the primary point cloud, which contains many redundant points; after reduction, the number of points is reduced from 87 458 to 6267, as shown in Fig. 18(b). The Delaunay triangulation and the final 3D reconstruction are shown in Fig. 18(c) and (d). The results meet the requirements for dental applications.
Fig. 15 A typical dental piece for a translation scan.

Fig. 16 A typical dental piece for a rotation scan.

Fig. 17 Reconstruction of translation scan. (a) The primary point cloud; (b) point cloud after de-noising and reduction; (c) the result of the Delaunay triangulation; (d) the final 3D reconstruction.

Fig. 18 Reconstruction of rotation scan. (a) The primary point cloud; (b) point cloud after de-noising and reduction; (c) the result of the Delaunay triangulation; (d) the final 3D reconstruction.

7. Conclusions

This paper developed a dual-platform laser scanner and proposed a hybrid calibration method for 3D laser scanning, applied to the 3D reconstruction of dental pieces. The dual-platform scanner has a low cost and is suitable for different dental pieces. The hybrid calibration, which includes an integrative method for data collection and a hybrid algorithm for data processing, achieves convenient operation and high precision. The integrative method is able to collect a substantial number of accurate calibrating points by means of a stepped gauge and a pattern, with little human intervention. The hybrid algorithm synthesizes the advantages of the mathematical and machine-learning methods through the combination of a basic model and a compensation network. The calibration experiments verified the excellent performance of the hybrid calibration, which exhibited strong stability and small errors. Two typical dental pieces were measured in order to demonstrate the validity of the measurement performed using the dual-platform scanner. This method provides an effective means for the 3D reconstruction of dental pieces in clinical teaching.
The dual-platform laser scanner can also be applied to the 3D measurement of objects with irregular surfaces, such as the reconstruction of sculptures and artifacts, the measurement of complex industrial parts, and rapid reverse engineering combined with 3D printing. However, the dual-platform laser scanner is bulky and not portable. The scanning process is thus limited by the mechanical platform.

Acknowledgements

The authors are grateful for support from the National Science Fund for Excellent Young Scholars (51722509), the National Natural Science Foundation of China (51575440), the National Key R&D Program of China (2017YFB1104700), and the Shaanxi Science and Technology Project (2016GY-011).

Compliance with ethics guidelines

Shuming Yang, Xinyu Shi, Guofeng Zhang, and Changshuo Lv declare that they have no conflict of interest or financial conflicts to disclose.

[1]
Welk A., Rosin M., Seyer D., Splieth C., Siemer M., Meyer G.. German dental faculty attitudes towards computer-assisted learning and their correlation with personal and professional profiles. Eur J Dent Educ. 2005; 9(3): 123-130.

[2]
Munera N., Lora G.J., Garcia-Sucerquia J.. Evaluation of fringe projection and laser scanning for 3D reconstruction of dental pieces. Dyna. 2012; 79(171): 65-73.

[3]
Geng J.. Structured-light 3D surface imaging: a tutorial. Adv Opt Photonics. 2011; 3(2): 128-160.

[4]
Zhou W., Guo H., Li Q., Hong T.. Fine deformation monitoring of ancient building based on terrestrial laser scanning technologies. IOP Conf Ser Earth Environ Sci. 2014; 17: 012166.

[5]
Andersen U.V., Pedersen D.B., Hansen H.N., Nielsen J.S.. In-process 3D geometry reconstruction of objects produced by direct light projection. Int J Adv Manuf Technol. 2013; 68(1–4): 565-573.

[6]
Choi S., Kim P., Boutilier R., Kim M.Y., Lee Y.J., Lee H.. Development of a high speed laser scanning confocal microscope with an acquisition rate up to 200 frames per second. Opt Express. 2013; 21(20): 23611-23618.

[7]
Dewar R.. Self-generated targets for spatial calibration of structured light optical sectioning sensors with respect to an external coordinate system.

[8]
Duan F.J., Liu F.M., Ye S.H.. A new accurate method for the calibration of line structured light sensor. Chin J Sci Instrum. 2000; 21: 108-113. Chinese

[9]
Huynh D.Q., Owens R.A., Hartmann P.E.. Calibrating a structured light stripe system: a novel approach. Int J Comput Vis. 1999; 33(1): 73-86.

[10]
Zhou F., Zhang G.. Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations. Image Vis Comput. 2005; 23(1): 59-67.

[11]
Sun Q., Hou Y., Tan Q., Li G.. A flexible calibration method using the planar target with a square pattern for line structured light vision system. PLoS One. 2014; 9(9): e106911.

[12]
Xie Z., Wang X., Chi S.. Simultaneous calibration of the intrinsic and extrinsic parameters of structured-light sensors. Opt Lasers Eng. 2014; 58: 9-18.

[13]
Li J., Chen M., Jin X., Chen Y., Dai Z., Ou Z., . Calibration of a multiple axes 3D laser scanning system consisting of robot, portable laser scanner and turntable. Optik. 2011; 122(4): 324-329.

[14]
Li P., Zhang W., Xiong X.. A fast approach for calibrating 3D coordinate measuring system rotation axis based on line-structure light. Microcomput Appl. 2015; 34: 73-75. Chinese

[15]
Wu Q., Li J., Su X., Hui B.. An approach for calibration rotor position of three-dimensional measurement system for line-structure light. Chin J Lasers. 2008; 35(8): 1224-1227. Chinese

[16]
Chang M., Tai W.C.. 360-deg profile noncontact measurement using a neural network. Opt Eng. 1995; 34(12): 3572-3577.

[17]
Dipanda A., Woo S., Marzani F., Bilbault J.M.. 3D shape reconstruction in an active stereo vision system using genetic algorithms. Patt Recog. 2003; 36(9): 2143-2159.

[18]
Zhao Y., Ren H., Xu K., Hu J.. Method for calibrating intrinsic camera parameters using orthogonal vanishing points. Opt Eng. 2016; 55(8): 084106.

[19]
Tsai R.Y.. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J Robot Autom. 1987; 3(4): 323-344.

[20]
Zhang Z.. A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell. 2000; 22(11): 1330-1334.

[21]
Li X.W., Cho S.J., Kim S.T.. Combined use of BP neural network and computational integral imaging reconstruction for optical multiple-image security. Opt Commun. 2014; 315(6): 147-158.

[22]
Wei P., Cheng C., Liu T.. A photonic transducer-based optical current sensor using back-propagation neural network. IEEE Photonics Technol Lett. 2016; 28(14): 1513-1516.

[23]
Zhang Y., Liu W., Li X., Yang F., Gao P., Jia Z.. Accuracy improvement in laser stripe extraction for large-scale triangulation scanning measurement system. Opt Eng. 2015; 54(10): 105108.

[24]
Watson D.F.. Computing the n-dimensional Delaunay tessellation with applications to Voronoi polytopes. Comput J. 1981; 24(2): 167-172.
