
Quality inspection and error correction of fork-ear type wing-fuselage docking assembly based on multi-camera stereo vision

Published online by Cambridge University Press:  03 December 2024

Y.G. Zhu*
Affiliation:
Department of Aeronautical Manufacturing and Mechanical Engineering, Nanchang HangKong University, Nanchang, China
D. Li
Affiliation:
Department of Aeronautical Manufacturing and Mechanical Engineering, Nanchang HangKong University, Nanchang, China
Y. Wan
Affiliation:
Department of Aeronautical Manufacturing and Mechanical Engineering, Nanchang HangKong University, Nanchang, China
Y.F. Wang
Affiliation:
Department of Aeronautical Manufacturing and Mechanical Engineering, Nanchang HangKong University, Nanchang, China
Z.Z. Bai
Affiliation:
AVIC Shanxi Aircraft Industry Corporation LTD, Hanzhong, China
W. Cui
Affiliation:
Department of Aeronautical Manufacturing and Mechanical Engineering, Nanchang HangKong University, Nanchang, China
*
Corresponding author: Y.G. Zhu; Email: [email protected]

Abstract

During the automatic docking assembly of an aircraft wing-fuselage, using a monocular camera or dual cameras to monitor the fork-ear docking stage results in incomplete identification of the fork-ear pose-position and an inaccurate description of the deviation in the position coordinates of the intersection holes. To address this, a quality inspection and error correction method is proposed for fork-ear docking assembly based on multi-camera stereo vision. Initially, a multi-camera stereo vision detection system is established to inspect the quality of the fork-ear docking assembly. Subsequently, a mathematical model for solving the spatial position of the fork-ear feature points is developed, and a mathematical model for determining the spatial pose of the fork-ear is established by utilising the elliptical cone. Finally, an enhanced artificial fish swarm particle filter algorithm is proposed to track and estimate the coordinates of the fork-ear feature points. An adaptive weighted fusion algorithm is employed to fuse the detection data from the multi-camera system and the laser tracker, and a wing pose-position fine-tuning error correction model is constructed. Experimental results demonstrate that the method enhances the assembly quality inspection and effectively improves the wing-fuselage docking assembly accuracy of fork-ear type aircraft.

Type
Research Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Royal Aeronautical Society

Nomenclature

Clea    clearance between the fork-ear mating surfaces

Coa    coaxiality of the fork-ear intersection holes

x    x-axis

y    y-axis

z    z-axis

Δx_g    deviation in the x_g direction

Δy_g    deviation in the y_g direction

Δz_g    deviation in the z_g direction

f_cv    focal length of the vertical camera

f_cl    focal length of the left camera

f_cr    focal length of the right camera

Visual_min    minimum visual range

Step_min    minimum step

m_max    maximum number of iterations

try_max    maximum number of optimisation attempts

Greek symbol

$\psi$    yaw-angle deviation

$\varphi$    pitch-angle deviation

θ    rolling-angle deviation

$\delta$    congestion factor

$\alpha$    rotation angle about the x-axis

$\beta$    rotation angle about the y-axis

$\gamma$    rotation angle about the z-axis

$\varGamma$    translation vector

$\varTheta$    noise matrix

$\phi$    rotation angle between the intersection holes of the fork joint and the ear joint

Subscripts/Superscripts

g    docking assembly coordinate system

w    world coordinate system

c    camera

cv    vertical camera

cl    left camera

cr    right camera

las    laser tracker

th    theoretical value

tar    target value

Abbreviations

LAFSA-PF    local neighbourhood adaptive artificial fish swarm algorithm optimisation particle filter

1.0 Introduction

With the advent of high-precision digital measurement equipment such as laser radar, laser trackers and iGPS [1-3], aircraft assembly has moved towards digitisation and automation [4, 5]. Consequently, the quality of aircraft assembly has improved significantly [6, 7]. In particular, the quality of aircraft wing-fuselage docking plays a pivotal role in determining the overall assembly accuracy and the aircraft's service life [8-10]. The predominant wing-fuselage junction methods are the frame type and the fork-ear type [11, 12]. Owing to its superior connection efficacy and ease of maintenance, the fork-ear type wing-fuselage junction method is widely used in trainer and fighter aircraft.

However, in the docking stage of the fork-ear type wing-fuselage, when a laser tracker is used to evaluate the docking quality, challenges arise due to wing occlusion. This occlusion prevents direct measurement of the characteristics of the fork-ear intersection holes with the laser tracker, resulting in measurements that lack intuitiveness and precision. With the continuous improvement in the precision of visual inspection equipment and the continuous progress of deep learning technology, machine vision technology is being applied increasingly widely [13-16]. For this reason, several researchers have explored the integration of machine vision to discern the characteristics of the fork-ear intersection holes. For instance, Li G et al. [17] proposed a novel self-calibration method using circular points and the Random Sample Consensus (RANSAC) algorithm. Numerical simulations and experiments were performed on the fork-ear intersection holes based on monocular vision, demonstrating the effectiveness of this method in enhancing the detection accuracy of the wing-fuselage docking. However, solely detecting the features of the fork-ear intersection holes is not enough to provide a comprehensive assessment of the fork-ear docking quality; the fit clearance of the fork-ear mating surfaces must also be measured. Addressing this, Zhu Y et al. [18] utilised two industrial cameras paired with four laser trackers to dynamically monitor the aircraft wing-fuselage docking assembly process. This approach allows for the detection of both the coaxiality of the intersection holes and the fit clearance of the mating surfaces, enabling dynamic tracking measurements of wing-fuselage docking and real-time deviation corrections. Similarly, Zha Q et al. [19] introduced a visual and automated wing-fuselage docking method based on heterogeneous equipment. This method, combining an industrial camera with a laser tracker, facilitated the detection of coaxiality and fit clearance and the completion of the automatic docking of the aircraft wing-fuselage.

Nevertheless, relying on dual cameras [20, 21] to oversee the fork-ear docking phase falls short in capturing the precise spatial position of the fork-ear feature points and the intricate contours of the intersection holes. This deficiency hinders a holistic depiction of the fork-ear's position and orientation deviations, which can compromise the assembly accuracy of wing-fuselage docking. Addressing this, the study introduces multi-camera stereo vision [22, 23], taking the fork-ear wing-fuselage docking assembly as the primary research focus. Following the stereo vision calibration of the multi-camera system [24], a stereo vision spatial point detection model is established for the multi-camera system. The spatial pose of the fork-ear is mathematically derived using the elliptical cone [25, 26] and the Rodrigues formula [27, 28]. Subsequently, a mathematical model for adjusting the wing's pose is formulated, with the pose-adjustment control points of the wing tracked and measured by the laser tracker. The data harvested from the multi-camera system and the laser tracker are integrated using an Adaptive Weighted Fusion Algorithm (AWFA) [29, 30]. Concurrently, the study proposes a Local Neighbourhood Adaptive Artificial Fish Swarm Algorithm Optimisation Particle Filter (LAFSA-PF) to track and forecast the fork-ear feature points, rectifying any detection inaccuracies. In addition, EDLines [31] and EDCircles [32] are deployed to monitor the fit clearance and coaxiality of the fork-ear in real time, ensuring that the accuracy requirements of the fork-ear aircraft wing-fuselage assembly quality inspection are met.

2.0 Fork-ear docking assembly quality inspection system

During the wing-fuselage assembly process, the docking stage of the fork-ear is obscured by the wing and fuselage. This occlusion makes it challenging for the laser tracker to directly measure the characteristics of the fork-ear intersection holes. The aircraft wing-fuselage docking assembly studied in this paper comprises four pairs of fork-ears, with the second pair designated as the primary fork-ear, whose assembly precision requirement is extremely stringent. Provided that the accurate assembly of the primary fork-ear is ensured, the assembly precision of the remaining auxiliary fork-ears will also meet the requirements. Consequently, the multiple fork-ear docking mechanisms can be effectively reduced to the analysis of a single fork-ear model. As a solution, the fork-ear docking assembly quality inspection system depicted in Fig. 1 is constructed.

Figure 1. Construction of fork-ear type wing-fuselage assembly quality inspection system.

The fork-ear type wing-fuselage docking process is divided into the following two stages:

  1. (1) The stage of wing pose-position adjustment: First, the laser tracker is used for the fuselage pose-position adjustment. Once adjusted, the pose-position of the fuselage is locked. Finally, the laser tracker is employed for the wing pose-position adjustment, ensuring that the ear joint enters the detection range of the multi-camera stereo vision system (Fig. 2).

  2. (2) The stage of fork-ear docking: First, the multi-camera stereo vision system is used to capture and obtain the target feature information of both the fork-ear mating surfaces and the intersection holes. Next, the quality of the fork-ear is assessed. Finally, data from the multi-camera and the laser tracker are fused, leading to the corresponding fine-tuning of the wing pose-position.

The quality parameters for fork-ear docking assembly include the fit clearances Clea_1 and Clea_2 between the fork-ear mating surfaces, the coaxiality Coa of the fork-ear intersection holes, the position deviations Δx_g, Δy_g and Δz_g of the fork-ear assembly, the yaw-angle deviation $\psi$, the pitch-angle deviation $\varphi$, and the rolling-angle deviation θ. The detailed content of the fork-ear assembly quality inspection is illustrated in Fig. 3.

Figure 2. The multi-camera stereo vision system.

Figure 3. The detailed content of the fork-ear assembly quality inspection.

To better inspect the quality of the fork-ear assembly, a multi-camera stereo vision system is employed in a triangular distribution. This system comprises a vertical camera, a left camera and a right camera. To enhance detection accuracy and achieve the unification of the multi-camera world coordinate system with the docking assembly coordinate system, a circular calibration plate is used to implement stereo vision calibration and global calibration of the laser tracker with the multi-camera stereo vision system. The overall layout of this multi-camera stereo vision system is shown in Fig. 4.

Figure 4. The general layout of the multi-camera stereo vision system.

In Fig. 4, ${o_g} - {x_g}{y_g}{z_g}$ denotes the docking assembly coordinate system for the wing-fuselage docking assembly. $o_{cv}$, $o_{cl}$ and $o_{cr}$ represent the optical centres of the vertical camera, the left camera and the right camera, respectively. $o_z$ is the convergence point where the optical axes of the vertical camera, the left camera and the right camera meet.

3.0 Quality determination of fork-ear docking assembly

3.1 Spatial position solution mathematical model of the fork-ear

The fork-ear features include points such as the centres of the fork-ear intersection holes and the fork-ear feature corner points, which are used to better detect the fork-ear docking assembly quality. Figure 5 displays the spatial position detection coordinate system for these fork-ear feature points, based on multi-camera stereo vision.

Figure 5. The spatial position detection coordinate system of fork-ear feature points.

In Fig. 5, ${o_{cv}} - {x_{cv}}{y_{cv}}{z_{cv}}$, ${o_{cl}} - {x_{cl}}{y_{cl}}{z_{cl}}$ and ${o_{cr}} - {x_{cr}}{y_{cr}}{z_{cr}}$ represent the camera coordinate systems of the vertical camera, left camera and right camera, respectively. ${O_{cv}} - {X_{cv}}{Y_{cv}}$, ${O_{cl}} - {X_{cl}}{Y_{cl}}$ and ${O_{cr}} - {X_{cr}}{Y_{cr}}$ represent the image coordinate systems of the vertical camera, left camera and right camera, respectively. ${o_{uv}} - {x_{uv}}{y_{uv}}$, ${o_{ul}} - {x_{ul}}{y_{ul}}$ and ${o_{ur}} - {x_{ur}}{y_{ur}}$ represent the pixel coordinate systems of the vertical camera, left camera and right camera, respectively. ${o_w} - {x_w}{y_w}{z_w}$ represents the world coordinate system of the multi-camera stereo vision system. p is any feature point in the world coordinate system, while P_v, P_l and P_r are its imaging points on the image coordinate systems of the vertical camera, left camera and right camera, respectively. The projection coordinates of p in the vertical camera, left camera and right camera coordinate systems are represented by ${p_v}\left( {{x_v},{y_v},{z_v}} \right)$, ${p_l}\left( {{x_l},{y_l},{z_l}} \right)$ and ${p_r}\left( {{x_r},{y_r},{z_r}} \right)$, respectively. The homogeneous coordinates of the imaging points of p in the vertical camera, left camera and right camera image coordinate systems are denoted by ${P_v}\left( {{X_v},{Y_v},1} \right)$, ${P_l}\left( {{X_l},{Y_l},1} \right)$ and ${P_r}\left( {{X_r},{Y_r},1} \right)$, respectively. The corresponding pixel homogeneous coordinates are ${p_{uv}}\left( {{x_{uv}},{y_{uv}},1} \right)$, ${p_{ul}}\left( {{x_{ul}},{y_{ul}},1} \right)$ and ${p_{ur}}\left( {{x_{ur}},{y_{ur}},1} \right)$, respectively. The pixel coordinates of p can be utilised to determine its corresponding image coordinates, as demonstrated in Equation (1).

(1) \begin{align}\left\{\begin{array}{rcl} {\boldsymbol{p}_{uv} ^{{\rm T}} } & {=} & {\left[\begin{array}{c} {x_{uv} } \\ {y_{uv} } \\ {1} \end{array}\right]=\boldsymbol{U}_{v} \boldsymbol{P}_{v} ^{{\rm T}} =\left[\begin{array}{ccc} {u_{xv} } & {0} & {x_{v0} } \\ {0} & {u_{yv} } & {y_{v0} } \\ {0} & {0} & {1} \end{array}\right]\left[\begin{array}{c} {X_{v} } \\ {Y_{v} } \\ {1} \end{array}\right]} \\ \\[-3pt] {\boldsymbol{p}_{ul} ^{{\rm T}} } & {=} & {\left[\begin{array}{c} {x_{ul} } \\ {y_{ul} } \\ {1} \end{array}\right]=\boldsymbol{U}_{l} \boldsymbol{P}_{l} ^{{\rm T}} =\left[\begin{array}{ccc} {u_{xl} } & {0} & {x_{l0} } \\ {0} & {u_{yl} } & {y_{l0} } \\ {0} & {0} & {1} \end{array}\right]\left[\begin{array}{c} {X_{l} } \\ {Y_{l} } \\ {1} \end{array}\right]} \\ \\[-3pt] {\boldsymbol{p}_{ur} ^{{\rm T}} } & {=} & {\left[\begin{array}{c} {x_{ur} } \\ {y_{ur} } \\ {1} \end{array}\right]=\boldsymbol{U}_{r} \boldsymbol{P}_{r} ^{{\rm T}} =\left[\begin{array}{ccc} {u_{xr} } & {0} & {x_{r0} } \\ {0} & {u_{yr} } & {y_{r0} } \\ {0} & {0} & {1} \end{array}\right]\left[\begin{array}{c} {X_{r} } \\ {Y_{r} } \\ {1} \end{array}\right]} \end{array}\right.\end{align}

where ${u_{xv}}$, ${u_{yv}}$, ${u_{xl}}$, ${u_{yl}}$, ${u_{xr}}$ and ${u_{yr}}$ denote the pixel sizes of the vertical camera, left camera and right camera, respectively, and $\left( {{x_{v0}},{y_{v0}}} \right)$, $\left( {{x_{l0}},{y_{l0}}} \right)$ and $\left( {{x_{r0}},{y_{r0}}} \right)$ represent the coordinates of the origins of the image coordinate systems of the vertical camera, left camera and right camera, respectively, in the pixel coordinate system. The corresponding camera coordinates are determined using ${P_v}\left( {{X_v},{Y_v},1} \right)$, ${P_l}\left( {{X_l},{Y_l},1} \right)$ and ${P_r}\left( {{X_r},{Y_r},1} \right)$, as shown in Equation (2).

(2) \begin{align}\left\{ \begin{array}{c}{z_v}{\boldsymbol{P}_v}^{\rm{T}} = {z_v}\left[ {\begin{array}{*{20}{c}}{{X_v}}\\{{Y_v}}\\1\end{array}} \right] = \left[ {{\boldsymbol{F}_{cv}}{\rm{ }}\quad 0} \right]{\boldsymbol{p}_v}^{\rm{T}} = \left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{{f_{cv}}} {}& 0& {}0& {}0\\0 {}&{{f_{cv}}}& {}0& {}0\\0& {}0& {}1 {}&0\end{array}} \right]{\left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{{x_v}} {}& {{y_v}}& {}{{z_v}}& {}1\end{array}} \right]^{\rm{T}}}\\ \\[-4pt]{z_l}{\boldsymbol{P}_l}^{\rm{T}} = {z_l}\left[ {\begin{array}{*{20}{c}}{{X_l}}\\{{Y_l}}\\1\end{array}} \right] = \left[ {{\boldsymbol{F}_{cl}}{\rm{ }}\quad 0} \right]{\boldsymbol{p}_l}^{\rm{T}} = \left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{{f_{cl}}}& {}0& {}0& {}0\\0& {}{{f_{cl}}}& {}0& {}0\\0& {}0& {}1& {}0\end{array}} \right]{\left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{{x_l}}& {}{{y_l}}& {}{{z_l}}& {}1\end{array}} \right]^{\rm{T}}}\\ \\[-4pt]{z_r}{\boldsymbol{P}_r}^{\rm{T}} = {z_r}\left[ {\begin{array}{*{20}{c}}{{X_r}}\\{{Y_r}}\\1\end{array}} \right] = \left[ {{\boldsymbol{F}_{cr}}{\rm{ }}\quad 0} \right]{\boldsymbol{p}_r}^{\rm{T}} = \left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{{f_{cr}}}& {}0& {}0& {}0\\0& {}{{f_{cr}}}& {}0& {}0\\0& {}0& {}1& {}0\end{array}} \right]{\left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{{x_r}}& {}{{y_r}}& {}{{z_r}} {}&1\end{array}} \right]^{\rm{T}}}\end{array} \right.\end{align}

Where ${f_{cv}}$ , ${f_{cl}}$ and ${f_{cr}}$ denote the focal lengths of the vertical camera, left camera and right camera, respectively. The homogeneous transformation matrices ${}_{cv}^{cl}\boldsymbol{M}$ , ${}_{cl}^{cr}\boldsymbol{M}$ and ${}_{cr}^{cv}\boldsymbol{M}$ represent the rotation and translation transformations between the following coordinate systems:

${}_{cv}^{cl}\boldsymbol{M}$ : Between the vertical camera coordinate system and the left camera coordinate system.

${}_{cl}^{cr}\boldsymbol{M}$ : Between the left camera coordinate system and the right camera coordinate system.

${}_{cr}^{cv}\boldsymbol{M}$ : Between the right camera coordinate system and the vertical camera coordinate system.

(3) \begin{align}\left\{ \begin{array}{l}\boldsymbol{p}_l^{\rm T} = {}_{cv}^{cl}\boldsymbol{M}\left[ {\begin{array}{*{20}{c}}{{x_v}}\\{{y_v}}\\{{z_v}}\\1\end{array}} \right] = \left[ {\begin{array}{c@{\quad}c}{{}_{cv}^{cl}\boldsymbol{R}} & {{}_{cv}^{cl}\boldsymbol{\varGamma }}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{x_v}}\\{{y_v}}\\{{z_v}}\\1\end{array}} \right] = \left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{{}_{cv}^{cl}{r_{11}}} & {{}_{cv}^{cl}{r_{12}}} & {{}_{cv}^{cl}{r_{13}}} & {{}_{cv}^{cl}{\tau _x}}\\{{}_{cv}^{cl}{r_{21}}} & {{}_{cv}^{cl}{r_{22}}} & {{}_{cv}^{cl}{r_{23}}} & {{}_{cv}^{cl}{\tau _y}}\\{{}_{cv}^{cl}{r_{31}}} & {{}_{cv}^{cl}{r_{32}}} & {{}_{cv}^{cl}{r_{33}}} & {{}_{cv}^{cl}{\tau _z}}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{x_v}}\\{{y_v}}\\{{z_v}}\\1\end{array}} \right]\\ \\[-4pt]\boldsymbol{p}_r^{\rm T} = {}_{cl}^{cr}\boldsymbol{M}\left[ {\begin{array}{*{20}{c}}{{x_l}}\\{{y_l}}\\{{z_l}}\\1\end{array}} \right] = \left[ {\begin{array}{c@{\quad}c}{{}_{cl}^{cr}\boldsymbol{R}} & {{}_{cl}^{cr}\boldsymbol{\varGamma }}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{x_l}}\\{{y_l}}\\{{z_l}}\\1\end{array}} \right] = \left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{{}_{cl}^{cr}{r_{11}}} & {{}_{cl}^{cr}{r_{12}}} & {{}_{cl}^{cr}{r_{13}}} & {{}_{cl}^{cr}{\tau _x}}\\{{}_{cl}^{cr}{r_{21}}} & {{}_{cl}^{cr}{r_{22}}} & {{}_{cl}^{cr}{r_{23}}} & {{}_{cl}^{cr}{\tau _y}}\\{{}_{cl}^{cr}{r_{31}}} & {{}_{cl}^{cr}{r_{32}}} & {{}_{cl}^{cr}{r_{33}}} & {{}_{cl}^{cr}{\tau _z}}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{x_l}}\\{{y_l}}\\{{z_l}}\\1\end{array}} \right]\\ \\[-4pt]\boldsymbol{p}_v^{\rm T} = {}_{cr}^{cv}\boldsymbol{M}\left[ {\begin{array}{*{20}{c}}{{x_r}}\\{{y_r}}\\{{z_r}}\\1\end{array}} \right] = \left[ {\begin{array}{c@{\quad}c}{{}_{cr}^{cv}\boldsymbol{R}} & {{}_{cr}^{cv}\boldsymbol{\varGamma }}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{x_r}}\\{{y_r}}\\{{z_r}}\\1\end{array}} \right] = \left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{{}_{cr}^{cv}{r_{11}}} & {{}_{cr}^{cv}{r_{12}}} & {{}_{cr}^{cv}{r_{13}}} & {{}_{cr}^{cv}{\tau _x}}\\{{}_{cr}^{cv}{r_{21}}} & {{}_{cr}^{cv}{r_{22}}} & {{}_{cr}^{cv}{r_{23}}} & {{}_{cr}^{cv}{\tau _y}}\\{{}_{cr}^{cv}{r_{31}}} & {{}_{cr}^{cv}{r_{32}}} & {{}_{cr}^{cv}{r_{33}}} & {{}_{cr}^{cv}{\tau _z}}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{x_r}}\\{{y_r}}\\{{z_r}}\\1\end{array}} \right]\end{array} \right.\end{align}

where ${}_{cv}^{cl}\boldsymbol{R}$, ${}_{cl}^{cr}\boldsymbol{R}$ and ${}_{cr}^{cv}\boldsymbol{R}$ represent the rotation matrices between the coordinate systems of the cameras, and ${}_{cv}^{cl}\boldsymbol{\varGamma }$, ${}_{cl}^{cr}\boldsymbol{\varGamma }$ and ${}_{cr}^{cv}\boldsymbol{\varGamma }$ denote the corresponding translation vectors. The parameters ${}_{cv}^{cl}\boldsymbol{R}$, ${}_{cl}^{cr}\boldsymbol{R}$, ${}_{cr}^{cv}\boldsymbol{R}$, ${}_{cv}^{cl}\boldsymbol{\varGamma }$, ${}_{cl}^{cr}\boldsymbol{\varGamma }$ and ${}_{cr}^{cv}\boldsymbol{\varGamma }$ are determined through multi-camera stereo vision calibration. By combining Equation (3) with Equation (2), the relationship between the two imaging points of p in the image coordinate systems of the left camera and the right camera can be solved, as shown in Equation (4).

(4) \begin{align}\begin{array}{c}{z_r}{\boldsymbol{P}_r}^{\rm{T}} = {\boldsymbol{F}_{cr}}{}_{cl}^{cr}\boldsymbol{M}{\left[ {\begin{array}{c@{\quad}c}{{\boldsymbol{F}_{cl}}} & 0\end{array}} \right]^{ - 1}}{z_l}{\boldsymbol{P}_l}^{\rm{T}}\\[5pt] = \left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{{}_{cl}^{cr}{r_{11}}{f_{cr}}} & {{}_{cl}^{cr}{r_{12}}{f_{cr}}} & {{}_{cl}^{cr}{r_{13}}{f_{cr}}} & {{}_{cl}^{cr}{\tau _x}{f_{cr}}}\\[5pt] {{}_{cl}^{cr}{r_{21}}{f_{cr}}} & {{}_{cl}^{cr}{r_{22}}{f_{cr}}} & {{}_{cl}^{cr}{r_{23}}{f_{cr}}} & {{}_{cl}^{cr}{\tau _y}{f_{cr}}}\\[5pt] {{}_{cl}^{cr}{r_{31}}} & {{}_{cl}^{cr}{r_{32}}} & {{}_{cl}^{cr}{r_{33}}} & {{}_{cl}^{cr}{\tau _z}}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{\frac{{{X_l}{z_l}}}{{{f_{cl}}}}}\\[5pt] {\frac{{{Y_l}{z_l}}}{{{f_{cl}}}}}\\[5pt]{{z_l}}\\[5pt] 1\end{array}} \right]\end{array}\end{align}

The relationship between the image points from any two cameras within the multi-camera system can be determined using the aforementioned method. Through Equation (4), we can derive z l , as shown in Equation (5).

(5) \begin{align}{z_l} = \frac{{{f_{cl}}\left( {{}_{cl}^{cr}{\tau _y}{f_{cr}} - {}_{cl}^{cr}{\tau _z}{Y_r}} \right)}}{{{Y_r}\left( {{}_{cl}^{cr}{r_{31}}{X_l} + {}_{cl}^{cr}{r_{32}}{Y_l} + {}_{cl}^{cr}{r_{33}}{f_{cl}}} \right) - {f_{cr}}\left( {{}_{cl}^{cr}{r_{21}}{X_l} + {}_{cl}^{cr}{r_{22}}{Y_l} + {}_{cl}^{cr}{r_{23}}{f_{cl}}} \right)}}\end{align}

The homogeneous transformation matrix representing the relationship between the left camera and the world coordinate system is denoted by ${}_{cl}^w\boldsymbol{M}$ , as illustrated in Equation (6).

(6) \begin{align}{}_{cl}^w\boldsymbol{M} = \left[ {\begin{array}{c@{\quad}c}{{}_{cl}^w\boldsymbol{R}} & {{}_{cl}^w\boldsymbol{\varGamma }}\end{array}} \right] = \left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{{}_{cl}^w{r_{11}}} & {{}_{cl}^w{r_{12}}} & {{}_{cl}^w{r_{13}}} & {{}_{cl}^w{\tau _x}}\\{{}_{cl}^w{r_{21}}} & {{}_{cl}^w{r_{22}}} & {{}_{cl}^w{r_{23}}} & {{}_{cl}^w{\tau _y}}\\{{}_{cl}^w{r_{31}}} & {{}_{cl}^w{r_{32}}} & {{}_{cl}^w{r_{33}}} & {{}_{cl}^w{\tau _z}}\end{array}} \right]\end{align}

Therefore, utilising the coordinate transformation relationship and combining Equation (5) and Equation (6), Equation (7) can be obtained to solve the position coordinates ${p_w}\left( {{x_w},{y_w},{z_w}} \right)$ of point p in the world coordinate system of the multi-camera stereo vision system.

(7) \begin{align}\left\{ \begin{array}{l}{x_w} = \dfrac{{{}_{cl}^w{r_{11}}{X_l}{z_l}}}{{{f_{cl}}}} + \dfrac{{{}_{cl}^w{r_{12}}{Y_l}{z_l}}}{{{f_{cl}}}} + {}_{cl}^w{r_{13}}{z_l} + {}_{cl}^w{\tau _x}\\[9pt] {y_w} = \dfrac{{{}_{cl}^w{r_{21}}{X_l}{z_l}}}{{{f_{cl}}}} + \dfrac{{{}_{cl}^w{r_{22}}{Y_l}{z_l}}}{{{f_{cl}}}} + {}_{cl}^w{r_{23}}{z_l} + {}_{cl}^w{\tau _y}\\[9pt] {z_w} = \dfrac{{{}_{cl}^w{r_{31}}{X_l}{z_l}}}{{{f_{cl}}}} + \dfrac{{{}_{cl}^w{r_{32}}{Y_l}{z_l}}}{{{f_{cl}}}} + {}_{cl}^w{r_{33}}{z_l} + {}_{cl}^w{\tau _z}\end{array} \right.\end{align}
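To make this chain of computations concrete, the following Python sketch (a minimal illustration, not the authors' implementation) strings together the inverse of Equation (1), the depth recovery of Equation (5) and the world-frame transformation of Equation (7). All calibration quantities (pixel sizes, principal points, focal lengths, and the cl-to-cr and cl-to-world transforms) are assumed to be available from the stereo and global calibration described in Section 2.

```python
import numpy as np

def pixel_to_image(p_u, u_x, u_y, x0, y0):
    """Invert Equation (1): image coordinates (X, Y) from pixel coordinates (x_u, y_u)."""
    return np.array([(p_u[0] - x0) / u_x, (p_u[1] - y0) / u_y])

def depth_from_stereo(X_l, Y_l, Y_r, f_cl, f_cr, R_clcr, T_clcr):
    """Equation (5): depth z_l of the feature point in the left-camera frame,
    from the left/right image coordinates and the cl-to-cr calibration (R, tau)."""
    r, tau_y, tau_z = R_clcr, T_clcr[1], T_clcr[2]
    num = f_cl * (tau_y * f_cr - tau_z * Y_r)
    den = (Y_r * (r[2, 0] * X_l + r[2, 1] * Y_l + r[2, 2] * f_cl)
           - f_cr * (r[1, 0] * X_l + r[1, 1] * Y_l + r[1, 2] * f_cl))
    return num / den

def left_image_to_world(X_l, Y_l, z_l, f_cl, R_clw, T_clw):
    """Equation (7): left-camera point (X_l z_l / f_cl, Y_l z_l / f_cl, z_l) mapped into the world frame."""
    p_cl = np.array([X_l * z_l / f_cl, Y_l * z_l / f_cl, z_l])
    return R_clw @ p_cl + T_clw
```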

For any pair of cameras within the multi-camera stereo vision system, the position coordinates of the stereo feature points in the world coordinate system can be determined using the method described above. The homogeneous transformation matrix that relates the world coordinate system to the docking assembly global coordinate system is represented by ${}_w^g\boldsymbol{M}$, as illustrated in Equation (8).

(8) \begin{align}{}_w^g\boldsymbol{M} = \left[ {\begin{array}{c@{\quad}c}{{}_w^g\boldsymbol{R}} & {{}_w^g\boldsymbol{\varGamma }}\end{array}} \right] = \left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{{\mathop{\rm c}\nolimits} \beta {\mathop{\rm c}\nolimits} \gamma } & {{\mathop{\rm s}\nolimits} \alpha s\beta c\gamma - c\alpha s\gamma } & {{\mathop{\rm c}\nolimits} \alpha {\mathop{\rm c}\nolimits} \gamma {\mathop{\rm s}\nolimits} \beta + {\mathop{\rm s}\nolimits} \alpha {\mathop{\rm s}\nolimits} \gamma } & {{}_w^g{\tau _x}}\\{{\mathop{\rm s}\nolimits} \gamma {\mathop{\rm c}\nolimits} \beta } & {{\mathop{\rm s}\nolimits} \alpha {\mathop{\rm s}\nolimits} \beta s\gamma + c\alpha {\mathop{\rm c}\nolimits} \gamma } & {{\mathop{\rm c}\nolimits} \alpha {\mathop{\rm s}\nolimits} \beta {\mathop{\rm s}\nolimits} \gamma - {\mathop{\rm s}\nolimits} \alpha {\mathop{\rm c}\nolimits} \gamma } & {{}_w^g{\tau _y}}\\{ - {\mathop{\rm s}\nolimits} \beta } & {{\mathop{\rm s}\nolimits} \alpha {\mathop{\rm c}\nolimits} \beta } & {{\mathop{\rm c}\nolimits} \alpha {\mathop{\rm c}\nolimits} \beta } & {{}_w^g{\tau _z}}\end{array}} \right]\end{align}

Where $\alpha$ , $\beta$ and $\gamma$ represent the rotation angles of the world coordinate system relative to the global coordinate system around the x-axis, y-axis and z-axis, respectively. By combining Equation (7) and Equation (8), the spatial coordinates of the fork-ear feature points ${p_g}\left( {{x_g},{y_g},{z_g}} \right)$ in the global coordinate system during assembly can be determined, as presented in Equation (9).

(9) \begin{align}{\boldsymbol{p}}_g^{\rm{T}} = \left[ {\begin{array}{*{20}{c}}{{x_g}}\\{{y_g}}\\{{z_g}}\end{array}} \right] = {}_w^g{\boldsymbol{M}}{\left[ {\begin{array}{*{20}{c}}{{x_w}}\quad {}{{y_w}}\quad {}{{z_w}}\quad {}1\end{array}} \right]^{\rm{T}}}\end{align}

The coordinates ${p_{g1}}\left( t \right) = \left( {{x_{g1}}\left( t \right),{y_{g1}}\left( t \right),{z_{g1}}\left( t \right)} \right)$ and ${p_{g2}}\left( t \right) = \left( {{x_{g2}}\left( t \right),{y_{g2}}\left( t \right),{z_{g2}}\left( t \right)} \right)$ representing the centres of the intersection holes of the fork joint and the ear joint at time t in the assembly global coordinate system can be determined using Equation (9). Because the pose of the fuselage is locked, the pose of the fork joint remains static, so the coordinate ${p_{g1}}\left( t \right)$ is fixed. Setting ${p_{g1}}\left( t \right) = \left( {{x_{g1}},{y_{g1}},{z_{g1}}} \right)$ and taking the axis of the fork-joint intersection hole as the reference axis, the coaxiality $Coa\left( t \right)$ of the fork-ear intersection holes at time t can be solved, as shown in Equation (10).

(10) \begin{align}Coa\left( t \right) = 2\sqrt {\Delta x{{\left( t \right)}^2} + \Delta z{{\left( t \right)}^2}} = 2\sqrt {{{\left( {{x_{g1}} - {x_{g2}}\left( t \right)} \right)}^2} + {{\left( {{z_{g1}} - {z_{g2}}\left( t \right)} \right)}^2}} \end{align}

The spatial position deviations of the intersection hole centre of the ear-type joint at time t are $\Delta {x_{o2}}\left( t \right)$ , $\Delta {y_{o2}}\left( t \right)$ and $\Delta {z_{o2}}\left( t \right)$ , as shown in Equation (11).

(11) \begin{align}\left\{ \begin{array}{l}\Delta {x_{o2}}\left( t \right) = x_{o2}^{tar} - {x_{o2}}\left( t \right)\\[4pt] \Delta {y_{o2}}\left( t \right) = y_{o2}^{tar} - {y_{o2}}\left( t \right)\\[4pt] \Delta {z_{o2}}\left( t \right) = z_{o2}^{tar} - {z_{o2}}\left( t \right)\end{array} \right.\end{align}

where $x_{o2}^{tar}$, $y_{o2}^{tar}$ and $z_{o2}^{tar}$ denote the target position coordinates of the centre of the intersection hole of the ear joint. Meanwhile, the position deviations of the fork-ear alignment, represented as Δx_g, Δy_g and Δz_g, correspond to $\Delta {x_{o2}}\left( t \right)$, $\Delta {y_{o2}}\left( t \right)$ and $\Delta {z_{o2}}\left( t \right)$, respectively.
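As a simple illustration of Equations (10) and (11), the following sketch (with assumed example coordinates rather than measured data) evaluates the coaxiality and the position deviations from the solved hole-centre coordinates.

```python
import numpy as np

def coaxiality(p_g1, p_g2_t):
    """Equation (10): coaxiality, i.e. twice the radial (x-z) offset between the two hole centres."""
    dx = p_g1[0] - p_g2_t[0]
    dz = p_g1[2] - p_g2_t[2]
    return 2.0 * np.hypot(dx, dz)

def position_deviation(p_tar, p_o2_t):
    """Equation (11): deviation of the ear-joint hole centre from its target position."""
    return np.asarray(p_tar) - np.asarray(p_o2_t)

# Example with assumed coordinates (mm).
print(coaxiality(np.array([100.0, 50.0, 20.0]), np.array([100.2, 50.1, 19.9])))   # ~0.447
print(position_deviation([100.0, 50.0, 20.0], [100.2, 50.1, 19.9]))               # ~[-0.2, -0.1, 0.1]
```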

3.2 Spatial pose determination mathematical model of fork-ear

When both the left camera and right camera are used to observe the intersection holes of the fork-ear, the unique centre and normal vector of the intersection holes of the fork-ear can be determined. This allows for the calculation of the spatial pose of the fork-ear intersection holes, which represents the pose state of the fork-ear. The spatial circular projection of the fork-ear intersection holes is shown in Fig. 6.

Figure 6. The spatial circular projection of the intersection holes of the fork-ear.

In Fig. 6, the optical centres ${\rm{ }}{o_{cl}}{\rm{ }}$ and ${\rm{ }}{o_{cr}}$ , in conjunction with the projection of the fork-ear intersection hole on the image coordinate system, form two elliptical cones. To simplify the equation for the elliptical cone, the coordinate systems of the left and right cameras are rotated using the rotation matrix ${\rm{ }}{\boldsymbol{R}}_c'{\rm{ }}$ to yield the elliptic cone coordinate systems ${\rm{ }}o_{cl}' - x_{cl}'y_{cl}'z_{cl}'{\rm{ }}$ and ${\rm{ }}o_{cr}' - x_{cr}'y_{cr}'z_{cr}'$ .

The equation for the elliptical projection of the fork-ear intersection hole on the coordinate planes of the left and right camera images is presented in Equation (12).

(12) \begin{align}{b_1}X_c^2 + {b_2}{X_c}{Y_c} + {b_3}Y_c^2 + {b_4}{X_c} + {b_5}{Y_c} + {b_6} = 0\end{align}

where the subscript c denotes cl or cr (i.e. the left or right camera). The elliptic cone equation can be obtained by combining Equation (12) with Equation (2):

(13) \begin{align}{b_1}f_c^2x_c^2 + {b_2}f_c^2{x_c}{y_c} + {b_3}f_c^2y_c^2 + {b_4}f_c^2{x_c}{z_c} + {b_5}f_c^2{y_c}{z_c} + {b_6}z_c^2 = 0\end{align}

Equation (13) can be rewritten as Equation (14).

(14) \begin{align}\left[ {\begin{array}{c@{\quad}c@{\quad}c}{{x_c}} & {{y_c}} & {{z_c}}\end{array}} \right]\left[ {\begin{array}{c@{\quad}c@{\quad}c}{{b_1}f_c^2} & {\dfrac{{{b_2}}}{2}f_c^2} & {\dfrac{{{b_4}}}{2}f_c^2}\\[9pt] {\dfrac{{{b_2}}}{2}f_c^2} & {{b_3}f_c^2} & {\dfrac{{{b_5}}}{2}f_c^2}\\[9pt] {\dfrac{{{b_4}}}{2}f_c^2} & {\dfrac{{{b_5}}}{2}f_c^2} & {{b_6}}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{x_c}}\\{{y_c}}\\{{z_c}}\end{array}} \right] = \left[ {\begin{array}{c@{\quad}c@{\quad}c}{{x_c}} & {{y_c}} & {{z_c}}\end{array}} \right]{\boldsymbol{B}}\left[ {\begin{array}{*{20}{c}}{{x_c}}\\{{y_c}}\\{{z_c}}\end{array}} \right] = 0\end{align}

The relationship between the elliptical cone coordinate system and its corresponding camera coordinate system is:

(15) \begin{align}\left[ {\begin{array}{*{20}{c}}{{x_c}}\\{{y_c}}\\{{z_c}}\end{array}} \right] = {\boldsymbol{R}}\left[ {\begin{array}{*{20}{c}}{x'_{\!\!c}}\\{y'_{\!\!c}}\\{z'_{\!\!c}}\end{array}} \right] = \left[ {\begin{array}{c@{\quad}c@{\quad}c}{{\mathop{\rm c}\nolimits} {\beta '}{\mathop{\rm c}\nolimits} {\gamma '}} & {{\mathop{\rm s}\nolimits} {\alpha '}s{\beta '}c{\gamma '} - c{\alpha '}s{\gamma '}} & {{\mathop{\rm c}\nolimits} {\alpha '}{\mathop{\rm c}\nolimits} {\gamma '}{\mathop{\rm s}\nolimits} {\beta '} + {\mathop{\rm s}\nolimits} {\alpha '}{\mathop{\rm s}\nolimits} {\gamma '}}\\{{\mathop{\rm s}\nolimits} {\gamma '}{\mathop{\rm c}\nolimits} {\beta '}} & {{\mathop{\rm s}\nolimits} {\alpha '}{\mathop{\rm s}\nolimits} {\beta '}s{\gamma '} + c{\alpha '}{\mathop{\rm c}\nolimits} {\gamma '}} & {{\mathop{\rm c}\nolimits} {\alpha '}{\mathop{\rm s}\nolimits} {\beta '}{\mathop{\rm s}\nolimits} {\gamma '} - {\mathop{\rm s}\nolimits} {\alpha '}{\mathop{\rm c}\nolimits} {\gamma '}}\\{ - {\mathop{\rm s}\nolimits} {\beta '}} & {{\mathop{\rm s}\nolimits} {\alpha '}{\mathop{\rm c}\nolimits} {\beta '}} & {{\mathop{\rm c}\nolimits} {\alpha '}{\mathop{\rm c}\nolimits} {\beta '}}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{x'_{\!\!c}}\\{y'_{\!\!c}}\\{z'_{\!\!c}}\end{array}} \right]\end{align}

where ${\alpha '}$, ${\beta '}$ and ${\gamma '}$ represent the rotation angles of the camera coordinate systems relative to the elliptic cone coordinate systems around the x-axis, y-axis and z-axis, respectively. Substituting Equation (15) into Equation (14), Equation (16) can be obtained.

(16) \begin{align}\left[ {\begin{array}{*{20}{c}}{x'_{\!\!c}}\quad {}{y'_{\!\!c}}\quad {}{z'_{\!\!c}}\end{array}} \right]{\boldsymbol{R}'}{_{\!\!c}^{\rm{T}}}{\boldsymbol{BR}'}_{\!\!c}\left[ {\begin{array}{*{20}{c}}{x'_{\!\!c}}\\{y'_{\!\!c}}\\{z'_{\!\!c}}\end{array}} \right] = 0\end{align}

Since ${\boldsymbol{B}}$ in Equation (16) is a real symmetric matrix, it can be diagonalised by an orthogonal matrix ${\boldsymbol{A}}$, as shown in Equation (17).

(17) \begin{align}{{\boldsymbol{A}}^{ - 1}}{\boldsymbol{BA}} = {{\boldsymbol{A}}^{\rm{T}}}{\boldsymbol{BA}} = \left[ {\begin{array}{c@{\quad}c@{\quad}c}{{\lambda _{1c}}} & {} & {}\\{} & {{\lambda _{2c}}} & {}\\{} & {} & {{\lambda _{3c}}}\end{array}} \right]\end{align}

Using Equation (17), the cone equation can be rewritten in the standard form of Equation (18).

(18) \begin{align}{\lambda _{1c}}x{_c'^2} + {\lambda _{2c}}y{_c'^2} + {\lambda _{3c}}z{_c'^2} = 0\end{align}

In the elliptical cone coordinate system, the left and right cameras can derive two distinct normal vectors for the fork-ear intersection holes, as illustrated in Equation (19).

(19) \begin{align}\left\{ \begin{array}{l}\boldsymbol{n}'_{\!\!\!l1} = \left[ {\begin{array}{c@{\quad}c@{\quad}c}{\sqrt {\dfrac{{\left| {{\lambda _{1cl}}} \right| - \left| {{\lambda _{2cl}}} \right|}}{{\left| {{\lambda _{1cl}}} \right| + \left| {{\lambda _{3cl}}} \right|}}} } & 0 & { - \sqrt {\dfrac{{\left| {{\lambda _{2cl}}} \right| + \left| {{\lambda _{3cl}}} \right|}}{{\left| {{\lambda _{1cl}}} \right| + \left| {{\lambda _{3cl}}} \right|}}} }\end{array}} \right]\\[10pt] \boldsymbol{n}'_{\!\!\!l2} = \left[ {\begin{array}{c@{\quad}c@{\quad}c}{ - \sqrt {\dfrac{{\left| {{\lambda _{1cl}}} \right| - \left| {{\lambda _{2cl}}} \right|}}{{\left| {{\lambda _{1cl}}} \right| + \left| {{\lambda _{3cl}}} \right|}}} } & 0 & { - \sqrt {\dfrac{{\left| {{\lambda _{2cl}}} \right| + \left| {{\lambda _{3cl}}} \right|}}{{\left| {{\lambda _{1cl}}} \right| + \left| {{\lambda _{3cl}}} \right|}}} }\end{array}} \right]\\[10pt] \boldsymbol{n}'_{\!\!\!r1} = \left[ {\begin{array}{c@{\quad}c@{\quad}c}{\sqrt {\dfrac{{\left| {{\lambda _{1cr}}} \right| - \left| {{\lambda _{2cr}}} \right|}}{{\left| {{\lambda _{1cr}}} \right| + \left| {{\lambda _{3cr}}} \right|}}} } & 0 & { - \sqrt {\dfrac{{\left| {{\lambda _{2cr}}} \right| + \left| {{\lambda _{3cr}}} \right|}}{{\left| {{\lambda _{1cr}}} \right| + \left| {{\lambda _{3cr}}} \right|}}} }\end{array}} \right]\\[10pt] \boldsymbol{n}'_{\!\!\!r2} = \left[ {\begin{array}{c@{\quad}c@{\quad}c}{ - \sqrt {\dfrac{{\left| {{\lambda _{1cr}}} \right| - \left| {{\lambda _{2cr}}} \right|}}{{\left| {{\lambda _{1cr}}} \right| + \left| {{\lambda _{3cr}}} \right|}}} } & 0 & { - \sqrt {\dfrac{{\left| {{\lambda _{2cr}}} \right| + \left| {{\lambda _{3cr}}} \right|}}{{\left| {{\lambda _{1cr}}} \right| + \left| {{\lambda _{3cr}}} \right|}}} }\end{array}} \right]\end{array} \right.\end{align}

The normal vectors of the fork-ear intersection holes in the world coordinate system are represented as ${}^w{\boldsymbol{n}_{l1}}$, ${}^w{\boldsymbol{n}_{l2}}$, ${}^w{\boldsymbol{n}_{r1}}$ and ${}^w{\boldsymbol{n}_{r2}}$, respectively, as depicted in Equation (20).

(20) \begin{align}\left\{ \begin{array}{l}{}^w{{\boldsymbol{n}}_{l1}} = {}_l^w{\boldsymbol{M}}\left[ {\begin{array}{c@{\quad}c}{{\boldsymbol{R}}_l'} & 0\\0 & 1\end{array}} \right]{\left[ {\begin{array}{c@{\quad}c}{{\boldsymbol{n}}_{l1}'} & 1\end{array}} \right]^{\rm{T}}}\\{}^w{{\boldsymbol{n}}_{l2}} = {}_l^w{\boldsymbol{M}}\left[ {\begin{array}{c@{\quad}c}{{\boldsymbol{R}}_l'} & 0\\0 & 1\end{array}} \right]{\left[ {\begin{array}{c@{\quad}c}{{\boldsymbol{n}}_{l2}'} & 1\end{array}} \right]^{\rm{T}}}\\{}^w{{\boldsymbol{n}}_{r1}} = {}_r^w{\boldsymbol{M}}\left[ {\begin{array}{c@{\quad}c}{{\boldsymbol{R}}_r'} & 0\\0 & 1\end{array}} \right]{\left[ {\begin{array}{c@{\quad}c}{{\boldsymbol{n}}_{r1}'} & 1\end{array}} \right]^{\rm{T}}}\\{}^w{{\boldsymbol{n}}_{r2}} = {}_r^w{\boldsymbol{M}}\left[ {\begin{array}{c@{\quad}c}{{\boldsymbol{R}}_r'} & 0\\0 & 1\end{array}} \right]{\left[ {\begin{array}{c@{\quad}c}{{\boldsymbol{n}}_{r2}'} & 1\end{array}} \right]^{\rm{T}}}\end{array} \right.\end{align}

In Equation (20), there exists only one set of real solutions, and these solutions are parallel. Consequently, the genuine normal vector ${}^w{{\boldsymbol{n}}_{real}}$ for the intersection holes of the fork-ear can be ascertained using Equation (21).

(21) \begin{align}{}^{w} \boldsymbol{n}_{real} =\left\{\begin{array}{l} {{\left({}^{w} \boldsymbol{n}_{l1} +{}^{w} \boldsymbol{n}_{r1} \right)/ 2,{\rm \; }if\angle \left({}^{w} \boldsymbol{n}_{l1}, {}^{w} \boldsymbol{n}_{r1} \right)\le \xi } } \\[5pt] {{\left({}^{w} \boldsymbol{n}_{l1} +{}^{w} \boldsymbol{n}_{r2} \right)/2,{\rm \; }if\angle \left({}^{w} \boldsymbol{n}_{l1}, {}^{w} \boldsymbol{n}_{r2} \right)\le \xi {\rm \; }} } \\[5pt] {{\left({}^{w} \boldsymbol{n}_{l2} +{}^{w} \boldsymbol{n}_{r1} \right)/2,{\rm \; }if\angle \left({}^{w} \boldsymbol{n}_{l2}, {}^{w} \boldsymbol{n}_{r1} \right)\le \xi {\rm \; }} } \\[5pt] {{\left({}^{w} \boldsymbol{n}_{l2} +{}^{w} \boldsymbol{n}_{r2} \right)/2,{\rm \; }if\angle \left({}^{w} \boldsymbol{n}_{l2}, {}^{w} \boldsymbol{n}_{r2} \right)\le \xi {\rm \; }} } \end{array}\right.\end{align}

where $\angle \left( {{}^w{{\boldsymbol{n}}_l},{}^w{{\boldsymbol{n}}_r}} \right)$ represents the angle between the two vectors enclosed in the brackets, and $\xi$ denotes the minimum threshold set for the vector angle, which is required because actual measurement error prevents the two vectors from being exactly parallel. Combined with Equation (7), the unique normal vector ${{\boldsymbol{n}}_g}$ of the fork-ear intersection hole in the assembly global coordinate system can be solved. Because the fuselage pose is locked, the pose of the fork joint is fixed; the normal vectors of the fork joint and the ear joint in the assembly global coordinate system at time t are designated as ${{\boldsymbol{n}}_{g1}}\left( t \right) = \left[ {\begin{array}{c@{\quad}c@{\quad}c}{{n_{x1}}} & {{n_{y1}}} & {{n_{z1}}}\end{array}} \right]$ and ${{\boldsymbol{n}}_{g2}}\left( t \right) = \left[ {\begin{array}{c@{\quad}c@{\quad}c}{{n_{x2}}\left( t \right)} & {{n_{y2}}\left( t \right)} & {{n_{z2}}\left( t \right)}\end{array}} \right]$, respectively. The rotation angle $\phi \left( t \right)$ between the intersection holes of the fork joint and the ear joint at time t can be computed by Equation (22).

(22) \begin{align}\cos \phi \left( t \right) = \frac{{{{\boldsymbol{n}}_{g1}}\left( t \right){{\boldsymbol{n}}_{g2}}\left( t \right)}}{{\left| {{{\boldsymbol{n}}_{g1}}\left( t \right)} \right|\!\left| {{{\boldsymbol{n}}_{g2}}\left( t \right)} \right|}}\end{align}

According to the definition of the vector cross product, the rotation axis of vectors ${{\boldsymbol{n}}_{g1}}\left( t \right)$ and ${{\boldsymbol{n}}_{g2}}\left( t \right)$ is denoted by ${\boldsymbol{r}}\left( t \right) = \left[ {\begin{array}{c@{\quad}c@{\quad}c}{{r_x}\left( t \right)} & {{r_y}\left( t \right)} & {{r_z}\left( t \right)}\end{array}} \right]$, as shown in Equation (23).

(23) \begin{align}{\boldsymbol{r}}{\left( t \right)^{\rm{T}}} = \left[ {\begin{array}{*{20}{c}}{{r_x}\left( t \right)}\\{{r_y}\left( t \right)}\\{{r_z}\left( t \right)}\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}{{n_{y1}}{n_{z2}}\left( t \right) - {n_{z1}}{n_{y2}}\left( t \right)}\\{{n_{z1}}{n_{x2}}\left( t \right) - {n_{x1}}{n_{z2}}\left( t \right)}\\{{n_{x1}}{n_{y2}}\left( t \right) - {n_{y1}}{n_{x2}}\left( t \right)}\end{array}} \right]\end{align}

The rotation matrix ${}_{o1}^{o2}{\boldsymbol{R}}\left( t \right)$ of the fork joint and the ear joint at time t can be solved by the Rodrigues formula, as shown in Equation (24).

(24) \begin{align}{}_{o1}^{o2}{\boldsymbol{R}}\left( t \right) = \cos \phi {\boldsymbol{E}} + \left( {1 - \cos \phi } \right){\boldsymbol{r}}{\left( t \right)^{\rm{T}}}{\boldsymbol{r}}\left( t \right) + \sin \phi \left[ {\begin{array}{c@{\quad}c@{\quad}c}0 & { - {r_z}\left( t \right)} & {{r_y}\left( t \right)}\\{{r_z}\left( t \right)} & 0 & { - {r_x}\left( t \right)}\\{ - {r_y}\left( t \right)} & {{r_x}\left( t \right)} & 0\end{array}} \right]\end{align}

Where E represents a ${\rm{ }}3 \times 3{\rm{ }}$ identity matrix. Utilising the rotation matrix of the fork joint and the ear joint, the yaw angle $\psi$ , pitch angle $\varphi$ , and rolling angle θ of the fork-ear engagement can be determined.
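The pose determination of Section 3.2 can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the authors' implementation: it builds the conic matrix of Equation (14) from fitted ellipse coefficients, diagonalises it as in Equations (17)-(18), forms the two candidate normals of Equation (19) in the elliptic-cone frame, and applies the Rodrigues construction of Equations (22)-(24) to a chosen pair of normals. The eigenvalue sign/ordering handling is an assumption, and the frame transformations of Equations (20)-(21) still have to be applied with the calibrated camera-to-world matrices before the two cameras' candidates can be compared.

```python
import numpy as np

def conic_matrix(b, f_c):
    """Equation (14): symmetric matrix B built from the fitted ellipse coefficients b = (b1..b6)."""
    b1, b2, b3, b4, b5, b6 = b
    return np.array([[b1 * f_c**2,     b2 * f_c**2 / 2, b4 * f_c**2 / 2],
                     [b2 * f_c**2 / 2, b3 * f_c**2,     b5 * f_c**2 / 2],
                     [b4 * f_c**2 / 2, b5 * f_c**2 / 2, b6            ]])

def candidate_normals(B):
    """Equations (17)-(19): diagonalise B and form the two candidate hole normals in the
    elliptic-cone frame. The eigenvalue sign/ordering handling below is an assumption."""
    lam, A = np.linalg.eigh(B)                                  # B is real symmetric
    signs = np.sign(lam)
    idx3 = int(np.argmin([np.sum(signs == s) for s in signs]))  # minority-sign eigenvalue -> lambda_3c
    rest = sorted([i for i in range(3) if i != idx3], key=lambda i: -abs(lam[i]))
    l1, l2, l3 = abs(lam[rest[0]]), abs(lam[rest[1]]), abs(lam[idx3])
    a = np.sqrt((l1 - l2) / (l1 + l3))
    c = np.sqrt((l2 + l3) / (l1 + l3))
    return np.array([a, 0.0, -c]), np.array([-a, 0.0, -c]), A   # n'_1, n'_2 and the diagonalising matrix

def rodrigues_rotation(n1, n2):
    """Equations (22)-(24): rotation matrix taking the fork-joint normal n1 onto the ear-joint normal n2."""
    n1 = np.asarray(n1, dtype=float) / np.linalg.norm(n1)
    n2 = np.asarray(n2, dtype=float) / np.linalg.norm(n2)
    cos_phi = float(np.dot(n1, n2))                             # Equation (22)
    r = np.cross(n1, n2)                                        # Equation (23): rotation axis
    if np.linalg.norm(r) > 1e-12:
        r = r / np.linalg.norm(r)
    sin_phi = float(np.sqrt(max(0.0, 1.0 - cos_phi**2)))
    skew = np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])
    return cos_phi * np.eye(3) + (1.0 - cos_phi) * np.outer(r, r) + sin_phi * skew   # Equation (24)
```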

4.0 Correction of dynamic detection error of fork-ear docking assembly quality

4.1 Pose-position adjustment mathematical modeling of the wing

To achieve greater accuracy in the fork-ear docking assembly, it is necessary to determine the spatial pose-position of the fork-ear and the wing in the docking assembly coordinate system. Therefore, the detection results of the multi-camera system for the fork-ear must be fused with the detection results of the laser tracker for the wing pose-position adjustment control points, and a mathematical model of the wing pose adjustment must be established.

T is the adjustment time of the wing. ${f_a} = {\left( {{x_a}\;{y_a}\;{z_a}\;{\theta _a}\;{\varphi _a}\;{\psi _a}} \right)^{\rm T}}$ is the initial pose-position of the wing, and ${f_b} = {\left( {{x_b}\;{y_b}\;{z_b}\;{\theta _b}\;{\varphi _b}\;{\psi _b}} \right)^{\rm T}}$ is the pose-position of the wing after the pose-position adjustment is completed. To ensure that the velocity and acceleration of the wing-fuselage docking process vary smoothly and to improve the alignment accuracy, a fifth-order polynomial is used to plan the wing motion trajectory. $f\left( t \right) = {\left( {x\;y\;z\;\theta\;\varphi\;\psi } \right)^{\rm T}}$ is the trajectory prediction equation, as shown in Equation (25).

(25) \begin{align}f(t) = {a_0}{t^5} + {a_1}{t^4} + {a_2}{t^3} + {a_3}{t^2} + {a_4}t + {a_5}\end{align}

Substituting the boundary constraints (the pose equal to $f_a$ and $f_b$, with zero velocity and acceleration, at t = 0 and t = T, respectively) into Equation (25), Equation (26) can be solved.

(26) \begin{align}\left\{ \begin{array}{l}{a_0} = \dfrac{{6\left( {{\varphi _b} - {\varphi _a}} \right)}}{{{T^5}}}\\[9pt] {a_1} = - \dfrac{{15\left( {{\varphi _b} - {\varphi _a}} \right)}}{{{T^4}}}\\[9pt] {a_2} = \dfrac{{10\left( {{\varphi _b} - {\varphi _a}} \right)}}{{{T^3}}}\\[9pt] {a_3} = 0\\{a_4} = 0\\{a_5} = {\varphi _a}\end{array} \right.\end{align}

Let $\Delta \varphi = {\varphi _b} - {\varphi _a}$; then Equation (27) can be obtained.

(27) \begin{align}{f_\varphi }\left( t \right) = \frac{{6\Delta \varphi }}{{{T^5}}}{t^5} - \frac{{15\Delta \varphi }}{{{T^4}}}{t^4} + \frac{{10\Delta \varphi }}{{{T^3}}}{t^3} + {\varphi _a}\end{align}

Using the same method as for Equation (27), the prediction equations for the remaining components of the wing pose-adjustment trajectory before fork-ear docking are obtained, as shown in Equation (28).

(28) \begin{align}\left\{ \begin{array}{l}{f_x}\left( t \right) = \dfrac{{6\Delta x}}{{{T^5}}}{t^5} - \dfrac{{15\Delta x}}{{{T^4}}}{t^4} + \dfrac{{10\Delta x}}{{{T^3}}}{t^3} + {x_a}\\[9pt] {f_y}\left( t \right) = \dfrac{{6\Delta y}}{{{T^5}}}{t^5} - \dfrac{{15\Delta y}}{{{T^4}}}{t^4} + \dfrac{{10\Delta y}}{{{T^3}}}{t^3} + {y_a}\\[9pt] {f_z}\left( t \right) = \dfrac{{6\Delta z}}{{{T^5}}}{t^5} - \dfrac{{15\Delta z}}{{{T^4}}}{t^4} + \dfrac{{10\Delta z}}{{{T^3}}}{t^3} + {z_a}\\[9pt] {f_\theta }\left( t \right) = \dfrac{{6\Delta \theta }}{{{T^5}}}{t^5} - \dfrac{{15\Delta \theta }}{{{T^4}}}{t^4} + \dfrac{{10\Delta \theta }}{{{T^3}}}{t^3} + {\theta _a}\\[9pt] {f_\psi }\left( t \right) = \dfrac{{6\Delta \psi }}{{{T^5}}}{t^5} - \dfrac{{15\Delta \psi }}{{{T^4}}}{t^4} + \dfrac{{10\Delta \psi }}{{{T^3}}}{t^3} + {\psi _a}\end{array} \right.\end{align}
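A minimal sketch of the fifth-order trajectory of Equations (25)-(28) is given below; the initial and final poses and the adjustment time T are assumed example values, not values from the paper.

```python
import numpy as np

def quintic(q_a, q_b, T, t):
    """Equations (26)-(28): one pose component at time t, with zero velocity and acceleration at t = 0 and t = T."""
    dq = q_b - q_a
    return (6 * dq / T**5) * t**5 - (15 * dq / T**4) * t**4 + (10 * dq / T**3) * t**3 + q_a

def wing_pose(f_a, f_b, T, t):
    """Evaluate all six pose components (x, y, z, theta, phi, psi) of Equation (28) at time t."""
    return np.array([quintic(a, b, T, t) for a, b in zip(f_a, f_b)])

# Example with assumed initial/final poses and a 60 s adjustment time; at t = T/2 the pose is halfway.
f_a = np.array([0.0, 0.0, 0.0, 0.00, 0.00, 0.00])
f_b = np.array([5.0, 2.0, -1.0, 0.20, 0.10, -0.05])
print(wing_pose(f_a, f_b, T=60.0, t=30.0))
```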

According to Equation (26), the theoretical coordinate values of the fork-ear feature points and the pose-adjustment control points ${p_i}\left( t \right)$ of the wing at time t are solved, as shown in Equation (29).

(29) \begin{align}{\boldsymbol{p}}_i^{th}\left( t \right) = {}_a^b{\boldsymbol{R}}\left( t \right)p_i^a\left( t \right) + {}_a^b{\boldsymbol{\varGamma }}\left( t \right)\end{align}

Where ${\rm{ }}{}_a^b{\boldsymbol{R}}\left( t \right) = \left[ {\begin{array}{c@{\quad}c@{\quad}c}{{\mathop{\rm c}\nolimits} {\theta _t}{\mathop{\rm c}\nolimits} {\psi _t}} & {{\mathop{\rm s}\nolimits} {\varphi _t}c{\theta _t}s{\psi _t} - c{\varphi _t}s{\theta _t}} & {{\mathop{\rm c}\nolimits} {\varphi _t}{\mathop{\rm c}\nolimits} {\theta _t}{\mathop{\rm s}\nolimits} {\psi _t} + {\mathop{\rm s}\nolimits} {\varphi _t}{\mathop{\rm s}\nolimits} {\theta _t}}\\{{\mathop{\rm s}\nolimits} {\theta _t}{\mathop{\rm c}\nolimits} {\psi _t}} & {{\mathop{\rm s}\nolimits} {\varphi _t}{\mathop{\rm s}\nolimits} {\theta _t}s{\psi _t} + c{\varphi _t}{\mathop{\rm c}\nolimits} {\theta _t}} & {{\mathop{\rm c}\nolimits} {\varphi _t}{\mathop{\rm s}\nolimits} {\theta _t}{\mathop{\rm s}\nolimits} {\psi _t} - {\mathop{\rm s}\nolimits} {\varphi _t}{\mathop{\rm c}\nolimits} {\theta _t}}\\{ - {\mathop{\rm s}\nolimits} {\psi _t}} & {{\mathop{\rm s}\nolimits} {\varphi _t}{\mathop{\rm c}\nolimits} {\psi _t}} & {{\mathop{\rm c}\nolimits} {\varphi _t}{\mathop{\rm c}\nolimits} {\psi _t}}\end{array}} \right]$ ; $p_i^a\left( t \right)$ is the coordinate of ${p_i}\left( t \right)$ in the wing coordinate system; ${}_a^b{\boldsymbol{\varGamma }}\left( t \right) = {\left( {\begin{array}{*{20}{c}}{{x_g}\left( t \right)} {}{{y_g}\left( t \right)} {}{{z_g}\left( t \right)}\end{array}} \right)^{\rm{T}}}$ . By using Equation (27), the state equation of the wing pose-position adjustment can be established.

(30) \begin{align}{\boldsymbol{X}}\left( {t + 1} \right) = {}_a^b{{\boldsymbol{R}}_{t,t - 1}}{\boldsymbol{X}}\left( t \right) + {{\boldsymbol{\varTheta }}_{t,t - 1}}{{\boldsymbol{\Lambda }}_{t - 1}}\left( {{t_k}} \right) + {}_a^b{{\boldsymbol{\varGamma }}_{t,t - 1}}\end{align}

Where ${}_a^b{{\boldsymbol{R}}_{t,t - 1}} = {}_a^b{\boldsymbol{R}}\left( t \right) - {}_a^b{\boldsymbol{R}}\left( {t - 1} \right)$ ; ${\boldsymbol{X}}\left( t \right) = {\left( {{\boldsymbol{p}}_i^{th}\left( t \right)} \right)^{\rm{T}}}$ ; ${{\boldsymbol{\varTheta }}_{t,t - 1}}$ is a $3 \times 3$ noise matrix; ${}_a^b{{\boldsymbol{\varGamma }}_{t,t - 1}} = {}_a^b{\boldsymbol{\varGamma }}\left( t \right) - {}_a^b{\boldsymbol{\varGamma }}\left( {t - 1} \right)$ ; ${{\boldsymbol{\Lambda }}_{t - 1}}\left( t \right) = {\left( {\begin{array}{*{20}{c}}{\Delta x\left( {t - 1} \right)} {}{\Delta y\left( {t - 1} \right)} {}{\Delta z\left( {t - 1} \right)}\end{array}} \right)^{\rm{T}}}$ is the dynamic noise at time $t - 1$ .
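A minimal sketch of the state propagation of Equation (30) is shown below; the inter-frame rotation and translation increments, the noise matrix and the dynamic noise vector are assumed example inputs.

```python
import numpy as np

def propagate_state(X_t, R_step, Gamma_step, Theta_step, Lambda_prev):
    """Equation (30): X(t+1) = R_{t,t-1} X(t) + Theta_{t,t-1} Lambda_{t-1} + Gamma_{t,t-1}."""
    return R_step @ X_t + Theta_step @ Lambda_prev + Gamma_step

# Example with assumed small inter-frame increments and dynamic noise.
X_next = propagate_state(X_t=np.array([1.0, 2.0, 3.0]),
                         R_step=0.01 * np.eye(3),
                         Gamma_step=np.array([0.10, 0.00, -0.05]),
                         Theta_step=np.eye(3),
                         Lambda_prev=np.array([0.001, -0.002, 0.001]))
print(X_next)
```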

4.2 Wing pose-position fine-tuning error correction based on LAFSA-PF and AWFA

The docking assembly of the fork-ear is dynamic. There’s a significant time difference, denoted as $\Delta t$ , between the moments the camera captures an image at time ${\rm{ }}{t_0}{\rm{ }}$ and when the image processing concludes with the pose-position adjustment at time ${\rm{ }}{t_1}$ . However, the fork-ear receives the pose-position adjustment value given by the system at time ${\rm{ }}{t_1}$ , which is actually the adjustment needed for the fork-ear’s pose-position at time ${\rm{ }}{t_0}$ .

From Section 4.1, it can be concluded that the wing pose-position adjustment is a nonlinear, non-Gaussian model, and a particle filter is needed to solve and predict it. However, the standard particle filter suffers from severe particle degradation and easily falls into local optima, which affects both the real-time performance and the accuracy of the prediction. To achieve real-time, accurate prediction of the fork-ear alignment path and to correct the pose-adjustment error generated within the $\Delta t$ interval, the LAFSA-PF is proposed to track and predict the fork-ear alignment process, and the data from the laser tracker and the multi-camera system are fused to further improve the detection accuracy.

The specific steps of error correction are as follows.

Step (1): During the initial stage of fork-ear docking assembly, importance sampling of the fork-ear feature points is performed using the wing pose-position adjustment state equation. This produces N particles denoted as $\left\{ {x_0^i,i = 1,2,3, \ldots, N} \right\}$, serving as the initial sample set. The importance density function is shown in Equation (31).

(31) \begin{align}\boldsymbol{x}_t^i\sim q\left( {\boldsymbol{x}_t^i|\boldsymbol{x}_{t - 1}^i,{\boldsymbol{z}_t}} \right) = p\left( {\boldsymbol{x}_t^i|\boldsymbol{x}_{t - 1}^i} \right)\end{align}

Step (2): The importance weight of each fork-ear feature point particle is calculated.

(32) \begin{align}w_t^i = w_{t - 1}^i\frac{{p\left( {{z_t}|x_t^i} \right)p\left( {{x_t}|x_{t - 1}^i} \right)}}{{q\left( {x_t^i|x_{0:t - 1}^i,{z_{1:k}}} \right)}}\end{align}

Step (3): The objective function y is given by:

(33) \begin{align}y = \frac{1}{{{{\left( {2\pi \zeta _v^2} \right)}^{{1/2}}}}}\exp \left[ { - \frac{1}{{2\zeta _v^2}}{{\left( {{z_t} - \hat z_{t|t - 1}^{\left( k \right)}} \right)}^2}} \right]\end{align}

where ${z_t}$ is the latest measurement value of the fork-ear feature point, $\hat z_{t|t - 1}^{\left( k \right)}$ is the predicted measurement value, and $\zeta _v^2$ is the measurement noise covariance of the particles.
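Under the choice of importance density in Equation (31), the weight update of Equation (32) reduces to multiplying the previous weight by the Gaussian likelihood of Equation (33). The following sketch (a minimal illustration with assumed scalar measurements) shows this reduced update together with the normalisation of Equation (38).

```python
import numpy as np

def measurement_likelihood(z_t, z_pred, var_v):
    """Equation (33): Gaussian likelihood of the latest measurement given a particle's prediction."""
    return np.exp(-0.5 * (z_t - z_pred)**2 / var_v) / np.sqrt(2.0 * np.pi * var_v)

def update_weights(w_prev, z_t, z_preds, var_v):
    """Equation (32) with q = p(x_t | x_{t-1}) (Equation (31)): multiply by the likelihood,
    then normalise as in Equation (38)."""
    w = w_prev * measurement_likelihood(z_t, z_preds, var_v)
    return w / np.sum(w)

# Example with four particles and assumed scalar measurements.
print(update_weights(np.ones(4) / 4, z_t=1.0, z_preds=np.array([0.9, 1.1, 1.3, 0.7]), var_v=0.04))
```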

Step (4): The local neighbourhood, adaptive visual range and adaptive step are introduced to optimise the artificial fish swarm:

At time t, the state of the ith artificial fish is defined by $\boldsymbol{x}_t^i = \left( {x_t^1,\,x_t^2, \cdots, x_t^n} \right)$. The algorithm is set with an adaptive visual range $Visua{l_i}$, an adaptive step $Ste{p_i}$ and a congestion factor $\delta$. The corresponding food concentration for this state is represented by $\boldsymbol{Y}_t^i = f\left( {\boldsymbol{x}_t^i} \right)$. Let the distance between any two fish i and j be represented as $di{s_{i,j}} = \left\| {x_t^i - x_t^j} \right\|$. The local neighbourhood is $N = \left\{ {\boldsymbol{x}_t^j|di{s_{i,j}} \lt Visual} \right\}$, and the number of neighbouring artificial fish is ${n_{fish}} = 5$.

When all artificial fish are within a local neighbourhood structure, the adaptive visual range $Visual_t^i$ and the adaptive step $Step_t^i$ of the artificial fish at time t are determined as follows:

(34) \begin{align}\left\{ \begin{array}{l}Visual_t^i = {K_1}\alpha \left( m \right)\frac{1}{5}\sum\limits_{j = 1}^5 {{d_{i,j}} + Visua{l_{\min }}} \\Step_t^i = \frac{1}{8}\alpha \left( m \right)Visual_t^i + Ste{p_{\min }}\\\alpha \left( m \right) = \exp \left( { - {K_2}{{\left( {{m/{{m_{\max }}}}} \right)}^2}} \right)\end{array} \right.\end{align}

where ${K_1}$ and ${K_2}$ are limiting factors, constants between 0 and 1; $Visua{l_{\min }}$ and $Ste{p_{\min }}$ represent the minimum visual range and the minimum step, respectively; $\alpha \left( m \right)$ is a function that decreases as the number of iterations m increases; and ${m_{\max }}$ is the maximum number of iterations.
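The adaptive visual range and step of Equation (34) can be sketched as follows; the limiting factors, minimum values and neighbour distances are assumed example inputs.

```python
import numpy as np

def adaptive_visual_step(dist_to_neighbours, m, m_max, K1=0.5, K2=0.8,
                         visual_min=0.1, step_min=0.02):
    """Equation (34): the visual range shrinks as the iteration count m grows; the step follows it."""
    alpha = np.exp(-K2 * (m / m_max)**2)
    visual = K1 * alpha * np.mean(dist_to_neighbours) + visual_min
    step = alpha * visual / 8.0 + step_min
    return visual, step

# Example: five neighbour distances (assumed), halfway through the iterations.
print(adaptive_visual_step(np.array([0.8, 1.1, 0.9, 1.3, 1.0]), m=50, m_max=100))
```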

Step (5): The artificial fish swarm particles are optimised by the local neighbourhood adaptive algorithm:

Swarming behaviour: at time t, the centre position of the neighbourhood structure of the ith artificial fish is $\boldsymbol{Cen}_t^i = \left( {cen_t^1,cen_t^2,cen_t^3, \cdots, cen_t^n} \right)$, as given by Equation (35).

(35) \begin{align}cen_t^i = \frac{1}{6}\sum\limits_{fish = 1}^6 {{x_{fis{h_k}}}} \end{align}

where $fis{h_k}$ represents the kth neighbouring artificial fish within the local neighbourhood of the ith artificial fish. Suppose the food concentration corresponding to the centre position at time t is $\boldsymbol{Y}_{Cen}^i = f\left( {\boldsymbol{Cen}_t^i} \right)$. If $\boldsymbol{Y}_{Cen}^i/{n_{fish}} \gt \delta \boldsymbol{Y}_t^i$, then the ith artificial fish moves one step forward towards $\boldsymbol{Cen}_t^i$, and the position component of the particle at the next moment is calculated according to Equation (36):

(36) \begin{align}x_{t + 1}^i = x_t^i + Rand\left( {} \right)S{\rm{te}}{{\rm{p}}_i}\frac{{cen_t^i - x_t^i}}{{\left\| {\boldsymbol{Cen}_t^i - \boldsymbol{x}_t^i} \right\|}}\end{align}

where $Rand\left( {} \right)$ is a random real number between 0 and 1. If $\boldsymbol{Y}_{Cen}^i/{n_{fish}} \lt \delta \boldsymbol{Y}_t^i$, the foraging behaviour is performed.

Foraging behaviour: The ith artificial fish randomly searches for a point within the adaptive visual range $\left[ { - Visual_t^i,Visual_t^i} \right]$ that satisfies ${\boldsymbol{Y}_{rand}} \gt {\boldsymbol{Y}_i}$ and then directly moves to that position. If after $tr{y_{\max }}$ optimisation attempts no point satisfying the requirements is found, a random value within the adaptive step size range $\left[ { - Step_t^i,Step_t^i} \right]$ is taken as the displacement increment for the next moment, and then the trailing behaviour is conducted.

Trailing behaviour: Find the artificial fish $fis{h_{best}}$ with the best position in the local neighbourhood. Assume that the current state of $fis{h_{best}}$ is $\boldsymbol{x}_{best}^i = \left( {x_{best}^1,x_{best}^2,x_{best}^3, \cdots, x_{best}^n} \right)$, with the corresponding food concentration $\boldsymbol{Y}_{best}^i = f\left( {\boldsymbol{x}_{best}^i} \right)$. If $\boldsymbol{Y}_{best}^i/{n_{fish}} \gt \delta \boldsymbol{Y}_t^i$, then the ith artificial fish moves one step forward towards $fis{h_{best}}$, and the particle position component at the next moment is calculated according to Equation (37). Otherwise, the foraging behaviour is performed again until the final optimisation result is obtained.

(37) \begin{align}x_{t + 1}^i = x_t^i + Rand\left( {} \right)S{\rm{te}}{{\rm{p}}_i}\frac{{x_{best}^i - x_t^i}}{{\left\| {\boldsymbol{x}_{best}^i - \boldsymbol{x}_t^i} \right\|}}\end{align}
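The swarming and trailing moves of Equations (36)-(37) share the same structure: a particle steps a random fraction of its adaptive step towards a target position (the neighbourhood centre or the best neighbour) whenever the crowding test passes. The following sketch is a minimal illustration of that shared update, with the crowding test written as it appears in the text.

```python
import numpy as np

def crowding_ok(Y_target, Y_i, n_fish, delta):
    """Crowding test used before the swarming and trailing moves: Y_target / n_fish > delta * Y_i."""
    return Y_target / n_fish > delta * Y_i

def move_towards(x_i, target, step_i, rng=None):
    """Equations (36)-(37): step from x_i towards `target` by a random fraction of the adaptive step."""
    rng = np.random.default_rng() if rng is None else rng
    direction = np.asarray(target, dtype=float) - np.asarray(x_i, dtype=float)
    norm = np.linalg.norm(direction)
    if norm < 1e-12:                      # already at the target position
        return np.asarray(x_i, dtype=float)
    return np.asarray(x_i, dtype=float) + rng.random() * step_i * direction / norm

# Example: a 3-D particle stepping towards an assumed neighbourhood centre.
print(move_towards(np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0]), step_i=0.5))
```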

Step (6): The weights of the N fork-ear feature point particles obtained by sampling after optimisation are calculated and normalised by Equation (38):

(38) \begin{align}\tilde w_t^i = \frac{{w_t^i}}{{\sum\limits_{i = 1}^N {w_t^i} }}\end{align}

Step (7): Estimate and output the state of the fork-ear feature points at time t using Equation (39).

(39) \begin{align}\begin{array}{l}\hat x\left( t \right) = \sum\limits_{i = 1}^N {\tilde w_t^ix_t^i} \\[5pt] {{\hat x}_i}\left( t \right) = {}_b^cR\left( t \right)p_i^a + {}_b^cT\left( t \right)\end{array}\end{align}

Step (8): Comprehensive error correction:

A joint linear deviation data fusion model, integrating multi-camera and laser tracker data, is constructed to correct the data fusion error:

(40) \begin{align}\left\{ \begin{array}{l}{{\boldsymbol{Z}}_{cv}}\left( t \right) = {{\boldsymbol{h}}_{cv}}X\left( t \right) + {{\boldsymbol{\varTheta }}_{cv}}\left( t \right),{{\boldsymbol{\varTheta }}_{cv}}\left( t \right) \sim \left( {0,\sigma _{cv}^2I} \right)\\{{\boldsymbol{Z}}_{cr}}\left( t \right) = {{\boldsymbol{h}}_{cr}}X\left( t \right) + {{\boldsymbol{\varTheta }}_{cr}}\left( t \right),{{\boldsymbol{\varTheta }}_{cr}}\left( t \right) \sim \left( {0,\sigma _{cr}^2I} \right)\\{{\boldsymbol{Z}}_{cl}}\left( t \right) = {{\boldsymbol{h}}_{cl}}X\left( t \right) + {{\boldsymbol{\varTheta }}_{cl}}\left( t \right),{{\boldsymbol{\varTheta }}_{cl}}\left( t \right) \sim \left( {0,\sigma _{cl}^2I} \right)\\{{\boldsymbol{Z}}_{las}}\left( t \right) = {{\boldsymbol{h}}_{las}}X\left( t \right) + {{\boldsymbol{\varTheta }}_{las}}\left( t \right),{{\boldsymbol{\varTheta }}_{las}}\left( t \right) \sim \left( {0,\sigma _{las}^2I} \right)\end{array} \right.\end{align}

where ${{\boldsymbol{h}}_{cv}}$ , ${{\boldsymbol{h}}_{cr}}$ , ${{\boldsymbol{h}}_{cl}}$ and ${{\boldsymbol{h}}_{las}}$ represent the observation matrices of the vertical camera, right camera, left camera and laser tracker, respectively; $X\left( t \right)$ represents the position deviation estimator of the fork-ear feature points; ${{\boldsymbol{\varTheta }}_{cv}}\left( t \right)$ , ${{\boldsymbol{\varTheta }}_{cr}}\left( t \right)$ , ${{\boldsymbol{\varTheta }}_{cl}}\left( t \right)$ and ${{\boldsymbol{\varTheta }}_{las}}\left( t \right)$ represent the measurement noise vectors of the respective devices at time t; $\sigma _{cv}^2$ , $\sigma _{cr}^2$ , $\sigma _{cl}^2$ and $\sigma _{las}^2$ represent the variances of the corresponding equations, all of which are normally distributed.

The AWFA is used to fuse the detection data of the multi-camera system and the laser tracker, and the optimal estimation value ${\boldsymbol{p}}_i^{est}\left( t \right)$ of the fused fork-ear feature point coordinates at time t (without filtering) is obtained:

(41) \begin{align}\begin{array}{c}\hat{\boldsymbol{h}}\left( t \right) = {\left[ {{\boldsymbol{C}}_{las}^{ - 1} + {\boldsymbol{C}}_{cv}^{ - 1} + {\boldsymbol{C}}_{cl}^{ - 1} + {\boldsymbol{C}}_{cr}^{ - 1}} \right]^{ - 1}}\left[ {{\boldsymbol{C}}_{las}^{ - 1}{{\boldsymbol{Z}}_{las}}\left( t \right) + {\boldsymbol{C}}_{cv}^{ - 1}{{\boldsymbol{Z}}_{cv}}\left( t \right) + {\boldsymbol{C}}_{cl}^{ - 1}{{\boldsymbol{Z}}_{cl}}\left( t \right) + {\boldsymbol{C}}_{cr}^{ - 1}{{\boldsymbol{Z}}_{cr}}\left( t \right)} \right]\\[5pt] {\boldsymbol{p}}_i^{est}\left( t \right) = \hat{\boldsymbol{h}}\left( t \right) + {\boldsymbol{p}}_i^{th}\left( t \right)\end{array}\end{align}

where ${{\boldsymbol{C}}_{las}}$ , ${{\boldsymbol{C}}_{cv}}$ , ${{\boldsymbol{C}}_{cl}}$ and ${{\boldsymbol{C}}_{cr}}$ are the covariance matrices of the laser tracker, vertical camera, left camera and right camera, respectively. The overall error correction process is shown in Fig. 7.
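As a concrete reading of Equations (40)–(41), the sketch below fuses the four deviation measurements by inverse-covariance weighting (the standard adaptive weighted fusion form, with the leading bracket inverted as reconstructed above). It is a minimal illustration under that assumption, not the authors' code.

```python
import numpy as np

def awfa_fuse(Z_list, C_list, p_th):
    """Inverse-covariance weighted fusion of Equations (40)-(41): each sensor
    deviation measurement Z_k with noise covariance C_k is weighted by C_k^{-1},
    and the fused deviation is added to the theoretical coordinate p_th."""
    C_inv = [np.linalg.inv(C) for C in C_list]
    W = np.linalg.inv(sum(C_inv))                       # [sum_k C_k^{-1}]^{-1}
    h_hat = W @ sum(Ci @ Z for Ci, Z in zip(C_inv, Z_list))
    return h_hat + p_th                                 # p_i^est(t)
```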

Figure 7. Error correction process based on LAFSA-PF and AWFA.

Figure 8. The platform construction of the fork-ear docking assembly quality detection system.

Figure 9. Detection effect of EDLines.

The wing pose-position estimate at time t is obtained by solving Equation (27). Comparing it with the theoretical pose-position yields the wing pose-position deviation $\Delta {f_a}\left( t \right) = {\left( {\Delta {x_a}\;\Delta {y_a}\;\Delta {z_a}\;\Delta {\varphi _a}\;\Delta {\theta _a}\;\Delta {\psi _a}} \right)^{\rm{T}}}$ at time t. The pose-adjustment error correction value $\Delta p_i^a\left( t \right)$ generated by each positioner within the time interval $\Delta t$ is then obtained from $\Delta {f_a}\left( t \right)$ combined with the inverse kinematics solution:

(42) \begin{align}\Delta \boldsymbol{p}_i^a\left( t \right) = \Delta \boldsymbol{R}\left( t \right)\boldsymbol{p}_o^a + \Delta \boldsymbol{\varGamma}\left( t \right)\end{align}

where $\Delta {\boldsymbol{R}}\left( t \right) = \left[ {\begin{array}{c@{\quad}c@{\quad}c}{{\rm{c}}\Delta {\theta _t}{\rm{c}}\Delta {\psi _t}} & {{\rm{s}}\Delta {\varphi _t}{\rm{c}}\Delta {\theta _t}{\rm{s}}\Delta {\psi _t} - {\rm{c}}\Delta {\varphi _t}{\rm{s}}\Delta {\theta _t}} & {{\rm{c}}\Delta {\varphi _t}{\rm{c}}\Delta {\theta _t}{\rm{s}}\Delta {\psi _t} + {\rm{s}}\Delta {\varphi _t}{\rm{s}}\Delta {\theta _t}}\\{{\rm{s}}\Delta {\theta _t}{\rm{c}}\Delta {\psi _t}} & {{\rm{s}}\Delta {\varphi _t}{\rm{s}}\Delta {\theta _t}{\rm{s}}\Delta {\psi _t} + {\rm{c}}\Delta {\varphi _t}{\rm{c}}\Delta {\theta _t}} & {{\rm{c}}\Delta {\varphi _t}{\rm{s}}\Delta {\theta _t}{\rm{s}}\Delta {\psi _t} - {\rm{s}}\Delta {\varphi _t}{\rm{c}}\Delta {\theta _t}}\\{ - {\rm{s}}\Delta {\psi _t}} & {{\rm{s}}\Delta {\varphi _t}{\rm{c}}\Delta {\psi _t}} & {{\rm{c}}\Delta {\varphi _t}{\rm{c}}\Delta {\psi _t}}\end{array}} \right]$, in which ${\rm{c}}$ and ${\rm{s}}$ denote $\cos$ and $\sin$; $\boldsymbol{p}_o^a$ is the position of the centre of the positioner ball; and $\Delta {\boldsymbol{\varGamma }}\left( t \right) = {\left[ {\begin{array}{c@{\quad}c@{\quad}c}{\Delta x\left( t \right)} & {\Delta y\left( t \right)} & {\Delta z\left( t \right)}\end{array}} \right]^{\rm{T}}}$.
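A small sketch of Equation (42), writing $\Delta\boldsymbol{R}(t)$ out exactly as above with NumPy (angles in radians; the function and variable names are illustrative only):

```python
import numpy as np

def delta_rotation(d_phi, d_theta, d_psi):
    """Rotation correction matrix of Equation (42), written out term by term
    as the printed Delta-R, with c = cos and s = sin."""
    c, s = np.cos, np.sin
    return np.array([
        [c(d_theta)*c(d_psi),
         s(d_phi)*c(d_theta)*s(d_psi) - c(d_phi)*s(d_theta),
         c(d_phi)*c(d_theta)*s(d_psi) + s(d_phi)*s(d_theta)],
        [s(d_theta)*c(d_psi),
         s(d_phi)*s(d_theta)*s(d_psi) + c(d_phi)*c(d_theta),
         c(d_phi)*s(d_theta)*s(d_psi) - s(d_phi)*c(d_theta)],
        [-s(d_psi),
         s(d_phi)*c(d_psi),
         c(d_phi)*c(d_psi)],
    ])

def positioner_correction(d_pose, p_o):
    """Equation (42): Delta-p_i^a = Delta-R * p_o^a + Delta-Gamma for one
    positioner ball centre p_o, given the wing pose deviation
    d_pose = (dx, dy, dz, d_phi, d_theta, d_psi)."""
    dx, dy, dz, d_phi, d_theta, d_psi = d_pose
    return delta_rotation(d_phi, d_theta, d_psi) @ p_o + np.array([dx, dy, dz])
```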

Step (9): Determine whether the fork-ear docking assembly is complete. If $t \geqslant T$, the assembly is complete and the process ends; otherwise ($t \lt T$), continue measuring the coordinates of the fork-ear feature points and return to Step (2).

5.0 Platform construction and experimental verification of the fork-ear docking assembly quality detection system

5.1 The platform construction of the fork-ear docking assembly quality detection system

The fork-ear docking assembly quality detection platform comprises a wing-fuselage docking experimental platform, a laser tracker, three industrial cameras, a multi-port PoE Gigabit network card for industrial camera image acquisition, and a computer fitted with this network card, as shown in Fig. 8.

Taking a certain type of aircraft as an example, the allowable deviation of the fork-ear mating position is set to $ \pm 0.05$ mm and the allowable attitude-angle deviation to $ \pm 0.1$ °. During wing-fuselage assembly, ${{\boldsymbol{\varTheta }}_{win}}\left( t \right)$ is set to $diag\left\{ {{{0.06}^2};\;{{0.06}^2};\;{{0.06}^2}} \right\}{\rm{m}}{{\rm{m}}^2}$. Since all cameras in the multi-camera system are of the same model, ${{\boldsymbol{\varTheta }}_c}\left( t \right)$ is set to $diag\left\{ {{{0.04}^2};\;{{0.04}^2};\;{{0.04}^2}} \right\}{\rm{m}}{{\rm{m}}^2}$, while ${{\boldsymbol{\varTheta }}_{las}}\left( t \right)$ for the laser tracker is set to $diag\left\{ {{{0.02}^2};\;{{0.02}^2};\;{{0.02}^2}} \right\}{\rm{m}}{{\rm{m}}^2}$.
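For illustration, these covariance settings can be assembled directly and passed to the fusion sketch given after Equation (41); the deviation measurements and theoretical coordinate below are placeholder values, not experimental data.

```python
import numpy as np

# Measurement noise covariances quoted above (units: mm^2)
C_cam = np.diag([0.04**2, 0.04**2, 0.04**2])    # identical for all three cameras
C_las = np.diag([0.02**2, 0.02**2, 0.02**2])    # laser tracker

# Placeholder deviation measurements of one feature point (mm), illustrative only
Z_cv = np.array([0.03, -0.01, 0.02])
Z_cl = np.array([0.02, -0.02, 0.03])
Z_cr = np.array([0.04, -0.01, 0.01])
Z_las = np.array([0.02, -0.02, 0.01])
p_th = np.array([0.0, 0.0, 0.0])                # theoretical coordinate (placeholder)

# awfa_fuse is the inverse-covariance fusion sketch shown after Equation (41)
p_est = awfa_fuse([Z_las, Z_cv, Z_cl, Z_cr], [C_las, C_cam, C_cam, C_cam], p_th)
```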

The EDLines line detection effect is shown in Fig. 9.

Figure 10. Detection effect of EDCircles.

The EDCircles circle detection effect is shown in Fig. 10.

5.2 Analysis of docking assembly quality detection effect of fork-ear based on multi-camera stereo vision

The experiments were conducted on the platform described in Section 5.1. In terms of performance, the average processing time for multi-camera stereo vision detection was 828 ms, with a peak of 876 ms, satisfying the real-time detection requirement of wing-fuselage docking assembly. For AWFA correction, the LAFSA-PF sample size was set to 5, 10 and 15 times the number of artificial fish in the local neighbourhood, giving sample sizes of 30, 60 and 90, respectively. The minimum field of visual for LAFSA-PF was set to Visual min = 0.1 mm, the minimum step to Step min = 0.001 mm, and the crowding factor to δ = 0.3. When examining the influence of the number of particles on the error correction results, both the maximum iteration number m max and the maximum optimisation number try max were set to 50, as shown in Fig. 11 and Table 1.

Figure 11 and Table 1 show that, for a fixed maximum number of iterations, the error correction accuracy increases with the number of particles; once the particle count reaches 60, the accuracy shows essentially no further improvement, while the running time continues to grow. To improve correction accuracy while maintaining real-time performance, the number of particles is therefore set to 60, and further experiments are carried out to verify the influence of the number of iterations on the error correction accuracy, with the iteration count taken as 10, 20, 30, 40 and 50. To avoid the clutter caused by too many intersecting curves in the figures, the absolute values of the fork-ear docking assembly deviations are compared.

Figure 12 and Table 2 show that, with the number of particles set to 60, the error correction accuracy increases with the number of iterations; after 30 iterations, however, the accuracy plateaus and no significant further improvement is observed. While increasing the number of iterations can enhance pose correction accuracy, it also risks excessively long running times. To balance correction accuracy and real-time performance, the maximum number of iterations is set to m max = 30 and the maximum number of optimisation attempts to try max = 30. The average time required for tracking estimation is 566 ms, with a peak of 606 ms, satisfying the real-time detection requirement of wing-fuselage docking assembly. The filtering correction time interval is therefore set to 1 s.
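Collected in one place, the parameter values chosen above for the LAFSA-PF correction stage can be recorded as a simple configuration (the dictionary and key names are illustrative, not from the authors' code):

```python
# Final LAFSA-PF settings selected in Section 5.2
lafsa_pf_config = {
    "n_particles": 60,          # particle count at which accuracy saturates
    "visual_min_mm": 0.1,       # minimum field of visual, Visual_min
    "step_min_mm": 0.001,       # minimum step, Step_min
    "delta": 0.3,               # crowding factor
    "m_max": 30,                # maximum iteration number
    "try_max": 30,              # maximum optimisation number
    "correction_interval_s": 1.0,
}
```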

The wing pose-position adjustment time is set to 60 s. The dual-camera and multi-camera configurations are each used to perform 30 wing-fuselage alignment experiments on the experimental platform of Fig. 8, and the deviation values of each spatial pose feature of the fork-ear after alignment are detected and recorded. By comparing the unfiltered fork-ear pose-position detection results of the dual-camera and multi-camera configurations, it is verified whether the multi-camera detection performance is improved.

Figure 13 shows that, without filtering, the direction deviation range of the dual-camera fork-ear detection is −0.118 mm ∼ 0.141 mm, while that of the multi-camera detection is −0.073 mm ∼ 0.071 mm; the pose-angle deviation range of the dual-camera detection is −0.3046° ∼ 0.3845°, while that of the multi-camera detection is −0.2017° ∼ 0.2057°. Therefore, the multi-camera configuration achieves smaller direction deviations and higher pose-angle accuracy than the dual-camera configuration for fork-ear docking assembly.

Table 1. Comparison of different number of particles

Figure 11. Influence of the number of particles on the error correction results: (a) xg direction deviation, (b) yg direction deviation, (c) zg direction deviation, (d) pitch-angle deviation $\varphi $ , (e) rolling-angle deviation $\theta $ , and (f) deflection-angle deviation $\psi $ .

Table 2. Comparison of different number of iterations

Figure 12. Influence of the number of iterations on the error correction results: (a) xg direction deviation, (b) yg direction deviation, (c) zg direction deviation, (d) pitch-angle deviation $\varphi $ , (e) rolling-angle deviation $\theta $ , and (f) deflection-angle deviation $\psi $ .

Figure 14 shows that, under multi-camera inspection, the direction deviation range of the detected fork-ear alignment position is −0.078 mm ∼ 0.065 mm without AWFA or LAFSA-PF correction, −0.041 mm ∼ 0.041 mm with AWFA correction only, −0.047 mm ∼ 0.057 mm with LAFSA-PF correction only, and −0.032 mm ∼ 0.037 mm after comprehensive correction with both AWFA and LAFSA-PF. The fork-ear attitude-angle deviation range is −0.1845° ∼ 0.1815° without correction, −0.1035° ∼ 0.1016° with AWFA only, −0.1099° ∼ 0.1208° with LAFSA-PF only, and −0.0915° ∼ 0.0963° after comprehensive correction. Therefore, the direction deviation is effectively reduced and the pose-angle detection accuracy is effectively improved after AWFA and LAFSA-PF correction; in addition, AWFA alone provides better correction than LAFSA-PF alone.

Figure 13. Comparison of inspection results between dual-camera and multi-camera: (a) xg direction deviation, (b) yg direction deviation, (c) zg direction deviation, (d) pitch-angle deviation $\varphi $ , (e) rolling-angle deviation $\theta $ , and (f) deflection-angle deviation $\psi $ .

Figure 14. Comparison of inspection results: None, LAFSA-PF, AWFA, AWFA&LAFSA-PF: (a) xg direction deviation, (b) yg direction deviation, (c) zg direction deviation, (d) pitch-angle deviation $\varphi $ , (e) rolling-angle deviation $\theta $ , and (f) deflection-angle deviation $\psi $ .

6.0 Conclusions

(1) The spatial pose-position determination model of the fork-ear docking assembly based on multi-camera stereo vision is proposed. As a result, the direction deviation range of the fork-ear docking assembly is reduced from −0.118 mm ∼ 0.141 mm to −0.073 mm ∼ 0.071 mm, and the pose-angle deviation range is reduced from −0.3046° ∼ 0.3845° to −0.2017° ∼ 0.2057°.

(2) The wing pose-position fine-tuning error correction model based on AWFA and LAFSA-PF is constructed. The direction deviation detection accuracy is further improved to −0.032 mm ∼ 0.037 mm, and the pose-angle deviation detection accuracy is further improved to −0.0915° ∼ 0.0963°.

(3) The method addresses the incomplete identification of the fork-ear pose-position by monocular and dual-camera vision, as well as the inaccurate detection of the position deviation of the fork-ear intersection holes. Furthermore, it provides a useful reference for multi-fork-ear type wing-fuselage docking.

Acknowledgements

This study was co-supported by the National Natural Science Foundation of China [52465060], the Aeronautical Science Foundation of China [2024M050056002] and the Key Research and Development Plan Project of Jiangxi Province [20243BBG71004].

References

Mei, Z. and Maropoulos, P.G. Review of the application of flexible, measurement-assisted assembly technology in aircraft manufacturing, Proc IME B J Eng Manuf, 2014, 228, (10), pp 1185–1197. https://doi.org/10.1177/0954405413517387
Gai, Y., Zhang, J., Guo, J., Shi, X., Wu, D. and Chen, K. Construction and uncertainty evaluation of large-scale measurement system of laser trackers in aircraft assembly, Measurement, 2020, 165, p 108144. https://doi.org/10.1016/j.measurement.2020.108144
Wang, Y., Liu, Y., Chen, H., Xie, Q., Zhang, K. and Wang, J. Combined measurement based wing-fuselage assembly coordination via multiconstraint optimization, IEEE T Instrum Meas, 2022, 71, pp 1–16. https://doi.org/10.1109/TIM.2022.3186675
Trabasso, L. and Mosqueira, G. Light automation for aircraft fuselage assembly, Aeronaut J, 2020, 124, (1272), pp 216–236. https://doi.org/10.1017/aer.2019.117
Mei, B., Wang, H. and Zhu, W. Pose and shape error control in automated machining of fastener holes for composite/metal wing-box assembly, J Manuf Process, 2021, 66, pp 101–114. https://doi.org/10.1016/j.jmapro.2021.03.052
Maropoulos, P.G., Muelaner, J.E., Summers, M.D. and Martin, O.C. A new paradigm in large-scale assembly-research priorities in measurement assisted assembly, Int J Adv Manuf Tech, 2014, 70, pp 621–633. https://doi.org/10.1007/s00170-013-5283-4
Zhang, Q., Zheng, S., Yu, C., Wang, Q. and Ke, Y. Digital thread-based modeling of digital twin framework for the aircraft assembly system, J Manuf Syst, 2022, 65, pp 406–420. https://doi.org/10.1016/j.jmsy.2022.10.004
Yu, H. and Du, F. A multi-constraints based pose coordination model for large volume components assembly, Chin J Aeronaut, 2020, 33, (04), pp 1329–1337. https://doi.org/10.1016/j.cja.2019.03.043
Wu, D. and Du, F. A heuristic cabin-type component alignment method based on multi-source data fusion, Chin J Aeronaut, 2020, 33, (08), pp 2242–2256. https://doi.org/10.1016/j.cja.2019.11.008
Wei, W., Jiang, C. and Xiao, W. Design and simulation of fuselage product digital assembly based on DELMIA, Procedia CIRP, 2023, 119, pp 438–443. https://doi.org/10.1016/j.procir.2023.03.106
Cui, Z. and Du, F. Assessment of large-scale assembly coordination based on pose feasible space, Int J Adv Manuf Tech, 2019, 104, pp 4465–4474. https://doi.org/10.1007/s00170-019-04307-8
Cui, Z. and Du, F. A coordination space model for assemblability analysis and optimization during measurement-assisted large-scale assembly, Applied Sciences, 2020, 10, (9), p 3331. https://doi.org/10.3390/app10093331
Zheng, Q., Zhao, P., Zhang, D. and Wang, H. MR-DCAE: Manifold regularization-based deep convolutional autoencoder for unauthorized broadcasting identification, International Journal of Intelligent Systems, 2021, 36, (12), pp 7204–7238. https://doi.org/10.1002/int.22586
Zheng, Q., Zhao, P., Wang, H., Elhanashi, A. and Saponara, S. Fine-grained modulation classification using multi-scale radio transformer with dual-channel representation, IEEE Communications Letters, 2022, 26, (6), pp 1298–1302. https://doi.org/10.1109/LCOMM.2022.3145647
Zheng, Q., Tian, X., Yu, Z., Jiang, N., Elhanashi, A., Saponara, S. and Yu, R. Application of wavelet-packet transform driven deep learning method in PM2.5 concentration prediction: A case study of Qingdao, China, Sustainable Cities and Society, 2023, 92, p 104486. https://doi.org/10.1016/j.scs.2023.104486
Zheng, Q., Tian, X., Yu, Z., Wang, H., Elhanashi, A. and Saponara, S. DL-PR: Generalized automatic modulation classification method based on deep learning with priori regularization, Eng Appl Artif Intel, 2023, 122, p 106082. https://doi.org/10.1016/j.engappai.2023.106082
Li, G., Huang, X. and Li, S. A novel circular points-based self-calibration method for a camera's intrinsic parameters using RANSAC, Meas Sci Technol, 2019, 30, (5), p 055005. https://doi.org/10.1088/1361-6501/ab09c0
Zhu, Y., Zhang, W., Deng, Z. and Liu, C. Dynamic synthesis correction of deviation for aircraft wing-fuselage docking assembly based on laser tracker and machine vision, Journal of Mechanical Engineering, 2019, 55, (24), pp 187–196. https://doi.org/10.3901/JME.2019.24.187
Zha, Q., Zhu, Y. and Zhang, W. Visual and automatic wing-fuselage docking based on data fusion of heterogeneous measuring equipments, J Chin Inst Eng, 2021, 44, (8), pp 792–802. https://doi.org/10.1080/02533839.2021.1978324
Wang, Z., Zhang, K., Chen, Y., Luo, Z. and Zheng, J. A real-time weld line detection for derusting wall-climbing robot using dual cameras, J Manuf Process, 2017, 27, pp 76–86. https://doi.org/10.1016/j.jmapro.2017.04.002
Yang, L., Dong, K., Ding, Y., Brighton, J., Zhan, Z. and Zhao, Y. Recognition of visual-related non-driving activities using a dual-camera monitoring system, Pattern Recognition, 2021, 116, p 107955. https://doi.org/10.1016/j.patcog.2021.107955
Xu, J., Bo, C. and Wang, D. A novel multi-target multi-camera tracking approach based on feature grouping, Comput Electr Eng, 2021, 92, p 107153. https://doi.org/10.1016/j.compeleceng.2021.107153
Rameau, F., Park, J., Bailo, O. and Kweon, I. MC-Calib: A generic and robust calibration toolbox for multi-camera systems, Comput Vis Image Und, 2022, 217, p 103353. https://doi.org/10.1016/j.cviu.2021.103353
Liu, X., Tian, J., Kuang, H. and Ma, X. A stereo calibration method of multi-camera based on circular calibration board, Electronics, 2022, 11, (4), p 627. https://doi.org/10.3390/electronics11040627
Peng, J., Xu, W. and Yuan, H. An efficient pose measurement method of a space non-cooperative target based on stereo vision, IEEE Access, 2017, 5, pp 22344–22362. https://doi.org/10.1109/access.2017.2759798
Liu, Y., Xie, Z., Zhang, Q., Zhao, X. and Liu, H. A new approach for the estimation of non-cooperative satellites based on circular feature extraction, Robot Auton Syst, 2020, 129, p 103532. https://doi.org/10.1016/j.robot.2020.103532
Karlgaard, C.D. and Schaub, H. Nonsingular attitude filtering using modified Rodrigues parameters, J Astronaut Sci, 2009, 57, (4), pp 777–791. https://doi.org/10.1007/BF03321529
Amrr, S.M., Nabi, M.U. and Iqbal, A. An event-triggered robust attitude control of flexible spacecraft with modified Rodrigues parameters under limited communication, IEEE Access, 2019, 7, pp 93198–93211. https://doi.org/10.1109/ACCESS.2019.2927616
Zhang, C., Guo, C. and Zhang, D. Data fusion based on adaptive interacting multiple model for GPS/INS integrated navigation system, Applied Sciences, 2018, 8, (9), p 1682. https://doi.org/10.3390/app8091682
Cong, Q. and Yu, W. Integrated soft sensor with wavelet neural network and adaptive weighted fusion for water quality estimation in wastewater treatment process, Measurement, 2018, 124, pp 436–446. https://doi.org/10.1016/j.measurement.2018.01.001
Akinlar, C. and Topal, C. EDLines: A real-time line segment detector with a false detection control, Pattern Recogn Lett, 2011, 32, (13), pp 1633–1642. https://doi.org/10.1016/j.patrec.2011.06.001
Akinlar, C. and Topal, C. EDCircles: A real-time circle detector with a false detection control, Pattern Recognition, 2013, 46, (3), pp 725–740. https://doi.org/10.1016/j.patcog.2012.09.020