International Journal of Computational Intelligence Systems

Volume 14, Issue 1, 2021, Pages 358 - 366

A Textural Distributions-Based Detection of Hazelnut Axial Direction

Authors
Wenju Zhou1, Fulong Yao1,*, Songyu Luan2, Lili Wang2, Johnkennedy Chinedu Ndubuisi1, Xian Wu1
1School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200072, China
2School of Information and Electrical Engineering, LuDong University, Yantai, 264025, China
*Corresponding author. Email: yaofl@shu.edu.cn
Received 15 July 2020, Accepted 24 November 2020, Available Online 11 December 2020.
DOI
10.2991/ijcis.d.201207.002
Keywords
Discrete paths; Projection histogram; Adaptive circle; Template matching
Abstract

Since the shell of the hazelnut is very hard, a slit must be squeezed along the axial direction to facilitate peeling. When hazelnuts are processed automatically in the industrial chain, their axial direction needs to be located quickly. In this paper, the sum of the projection gradients in a sensitive area of the hazelnut image is used to locate the hazelnut axis. Firstly, a search template with discrete paths is established to find the hazelnut contour and extract the hazelnut region from the image. Secondly, a sensitive area is selected to obtain projection histograms at different orientations, and the gradient sums of the projection histograms are calculated. Thirdly, the axial orientation of the hazelnut is determined by the largest sum. The experimental results show that the projection gradient sum method is fast enough to meet the requirements of industrial production, and its location accuracy is 94.2%.

Copyright
© 2021 The Authors. Published by Atlantis Press B.V.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).

1. INTRODUCTION

Hazelnut, with its high nutritional value and good taste, is one of consumers' favorite snacks. Presently, mechanical cracking is often adopted to process hazelnuts for food. To get a better cracking effect, each hazelnut needs to be manually positioned on the production line so that it can be cracked along its axial orientation. This manual operation has two disadvantages: low locating efficiency and high labor costs. At present, machine vision and mechanical control technologies are usually combined in the production line to improve the intelligence of production and save labor costs.

The application of machine vision inspection technology has been widely studied in many fields, such as electronics [1,2], metallurgy [3,4], pharmaceuticals [5], agriculture [6], fabric [7], and rail traffic [8,9]. Recognition techniques based on texture features have been researched by many scholars. Chen et al. adopted Bayesian data fusion to achieve automatic detection of metal surface cracks [10]; Dutta et al. used texture analysis and support vector regression to perform online prediction of flank wear from machined surface images [11]; Hu used an optimal elliptical Gabor filter to automatically detect defects in textured surfaces [12]; Yuan and Sun proposed fingerprint texture detection based on the histogram of oriented gradients [13]; Fan et al. used texture regression via conformal mapping to locate 3D facial landmarks [14]; Miao et al. studied asphalt pavement surface texture degradation using 3D image processing techniques and entropy theory [15]; Susan and Sharma used a Gaussian mixture entropy model to detect texture automatically [16]; Teo and Abdullah used local texture analysis and a support vector machine (SVM) to detect solar cell micro-cracks [17]; and Yang et al. proposed a method for detecting egg stains based on local texture feature clustering [18].

With the development of big data technology, machine learning has been widely used in vision systems and laser imaging. Arun and Nath proposed a machine learning method for an automated cashew grading system using external features such as color, texture, size, and shape [19]. Woźniak and Połap used hybrid models of neural networks and heuristic techniques to detect fruit peel defects [20]. In Castaño et al. [21,22], several artificial intelligence-based methods for obstacle recognition libraries were designed and applied using data collected by LiDAR sensors. In addition, many deep learning models have been successfully applied to fruit detection and location [23].

In previous works, some methods related to the task of hazelnut detection have been proposed. Onaran et al. proposed a method to detect empty hazelnuts based on the analysis of the acoustic signal generated by the impact of the nut on a steel plate [24]. Khosa and Pasero proposed an X-ray-based system for the classification of healthy and unhealthy hazelnuts [25]. In Caro et al. [26], a method for the inline quality evaluation of in-shell hazelnuts, based on time-domain nuclear magnetic resonance (TD-NMR) analysis, was proposed. However, these sensing systems are expensive and require a high computational cost.

As stated above, machine vision can be used for hazelnut axis location. It is noteworthy that although machine learning methods perform well in many machine vision tasks, large amounts of data are needed to ensure their generalization performance [27]. The performance of machine learning would clearly be greatly reduced by the lack of available hazelnut samples. In addition, the complexity of these models puts higher requirements on the supporting hardware [28]. In comparison with machine learning methods, hazelnut axis location based on surface texture features is promising. However, the surface textures of the hazelnut are by nature random and variable, which places high demands on the robustness of the location algorithm.

To this end, our main contributions are listed as follows:

  1. A fast extraction method of the hazelnut image based on the radially discrete search paths is proposed.

  2. A novel method is proposed to locate the hazelnut's axial orientation based on its surface texture characteristics.

  3. In order to improve the robustness and efficiency, an adaptive circle is used to select the appropriate projection area.

The rest of this article is organized as follows. Section 2 explains the extraction and binarization of the hazelnut image. Section 3 presents the method of projection gradient statistics. Section 4 reports the experiments and compares the proposed method with a template matching alternative. A brief conclusion is finally given in Section 5.

2. THE EXTRACTION AND BINARIZATION OF THE HAZELNUT IMAGE

As shown in Figure 1a, the color of a hazelnut placed on the production line differs from the background area. The image of the hazelnut is collected by a high-precision industrial camera. Based on the different brightness values of the hazelnut image and the background area, a fast extraction method of the hazelnut image using radially discrete search paths is proposed as follows.

  1. The average pixel values of the selected background area and the hazelnut image area, Eextra and Ein, are calculated, respectively. The mathematical expressions [29] are given in Equations (1) and (2).

    $E_{extra} = \frac{1}{m} \sum_{i=1}^{m} \phi(i)$    (1)
    $E_{in} = \frac{1}{q} \sum_{i=1}^{q} \phi(i)$    (2)

    where m denotes the number of pixels in the selected background area, q is the number of pixels in the selected hazelnut image area, and ϕ(i) is the pixel value.

  2. A search template based on radially discrete paths is established. The center of the image is taken as the origin, as shown in Figure 1b. The coordinates and storage form of the points lying on the radial search paths are obtained as follows:

    Firstly, the coordinates of the points lying on the radial search paths are derived from the following formulas:

    $x_j = x_{j-1} + \cos\left(\frac{\pi}{180} \times D\right)$    (3)
    $y_j = y_{j-1} + \sin\left(\frac{\pi}{180} \times D\right)$    (4)
    $D = \frac{360}{n} \times i$    (5)

    where D is the angle of the i-th search path, n is the number of search paths, i represents the serial number of the radial path, and (x_j, y_j) is the pixel coordinate of the j-th point on the path.

    Secondly, the search template is characterized by a three-dimensional array P[i, j, k], where i is the serial number of the search path, i = 1, 2, …, n; j is the pixel number on the i-th search path; and k is the coordinate label of the j-th pixel point on the i-th search path. When k = 0, P[i, j, k] represents the abscissa of the j-th pixel point on the i-th search path, that is, P[i, j, 0] = x_j; when k = 1, P[i, j, k] represents the ordinate, that is, P[i, j, 1] = y_j.

    Thirdly, the hazelnut image is scanned with this template: the search is performed along each radially discrete path to find the point lying on the hazelnut edge. The search formula is:

    $T_j = \begin{cases} \arg\min_j \left( \left|\phi_{j+1} - E_{extra}\right| + \left|\phi_{j-1} - E_{in}\right| \right) \\ \arg\max_j \left( \left|\phi_{j+1} - \phi_j\right| + \left|\phi_j - \phi_{j-1}\right| \right) \end{cases}$    (6)

    where T_j represents the pixel number of the hazelnut edge point M(x_j, y_j) on a given search path. The coordinates of each searched edge point M(x_j, y_j) are saved in the array Q[p, k], where p is the serial number of M(x_j, y_j) and k is its coordinate label, so that Q[p, 0] = x_j and Q[p, 1] = y_j. The hazelnut edge is plotted by joining all the searched edge points into a line, and the hazelnut image is extracted as shown in Figure 1c.

  3. In order to observe the surface texture of the hazelnut more clearly, a binary filtering algorithm based on texture characteristics is used. As shown in Figure 2, the three 3×3 pixel matrix blocks adjacent to the target pixel block (the first pixel block) are compared with it, yielding the parameters D12, D13, D14, and Dsum given in Equations (7)–(10). If Dsum is greater than the set threshold T, the value of the central pixel of the target pixel block is set to 0 [30]; otherwise, it is set to 255, as shown in Equation (11).

    $D_{12} = \left| \sum_{i=1}^{9} I_{1i} - \sum_{i=1}^{9} I_{2i} \right|$    (7)
    $D_{13} = \left| \sum_{i=1}^{9} I_{1i} - \sum_{i=1}^{9} I_{3i} \right|$    (8)
    $D_{14} = \left| \sum_{i=1}^{9} I_{1i} - \sum_{i=1}^{9} I_{4i} \right|$    (9)
    $D_{sum} = D_{12} + D_{13} + D_{14}$    (10)
    $I_{15} = \begin{cases} 0, & D_{sum} > T \\ 255, & D_{sum} \le T \end{cases}$    (11)

    where I_{1i}, I_{2i}, I_{3i}, and I_{4i} represent the i-th pixel of the matrix blocks I_1, I_2, I_3, and I_4, respectively: the first subscript indexes the matrix block and the second indexes the pixel within the block. D_{12}, D_{13}, and D_{14} represent the differences between the corresponding pairs of blocks.

    The hazelnut image shown in Figure 1c is binarized with the above adjacent 3×3 pixel matrix block method; Figure 1d shows the resulting binary image with clear texture. A code sketch of the whole extraction and binarization pipeline is given below.
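A minimal Python/NumPy sketch of the radial edge search (Equations (3)–(6)) and the texture binarization (Equations (7)–(11)); the paper's implementation is in C#, so the function names are ours, Equation (6)'s stacked criteria are folded into a single score, and the three comparison blocks of Figure 2 are assumed to lie to the right of the target block.

```python
import numpy as np

def find_edge_points(img, e_in, e_extra, n_paths=360, max_steps=400):
    """Search outward from the image centre along n radially discrete
    paths (Eqs. 3-5) and return the edge point found on each path."""
    h, w = img.shape
    cx, cy = w / 2.0, h / 2.0                   # the image centre is the origin
    edge_points = []
    for i in range(n_paths):
        d = np.deg2rad(360.0 / n_paths * i)     # angle D of the i-th path, Eq. (5)
        phis, coords = [], []
        for j in range(max_steps):              # walk along the path, Eqs. (3)-(4)
            x = int(round(cx + j * np.cos(d)))
            y = int(round(cy + j * np.sin(d)))
            if not (0 <= x < w and 0 <= y < h):
                break
            coords.append((x, y))
            phis.append(float(img[y, x]))
        # Eq. (6), read as a single score: the edge pixel maximises the local
        # gradient while its outer neighbour matches the background mean and
        # its inner neighbour matches the hazelnut mean.
        best_j, best_score = None, -np.inf
        for j in range(1, len(phis) - 1):
            score = (abs(phis[j + 1] - phis[j]) + abs(phis[j] - phis[j - 1])
                     - abs(phis[j + 1] - e_extra) - abs(phis[j - 1] - e_in))
            if score > best_score:
                best_score, best_j = score, j
        if best_j is not None:
            edge_points.append(coords[best_j])  # stored like the array Q[p, k]
    return edge_points

def texture_binarize(img, T):
    """Texture-based binarisation (Eqs. 7-11): compare each 3x3 block with
    three neighbouring blocks and threshold the summed differences."""
    h, w = img.shape
    out = np.full((h, w), 255, dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 10):              # room for blocks I2..I4 on the right
            s1 = img[y - 1:y + 2, x - 1:x + 2].sum(dtype=np.int64)
            d_sum = sum(abs(s1 - img[y - 1:y + 2, x - 1 + 3 * k:x + 2 + 3 * k]
                            .sum(dtype=np.int64)) for k in (1, 2, 3))  # Eq. (10)
            if d_sum > T:                       # Eq. (11)
                out[y, x] = 0
    return out
```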

Figure 1

Preprocessing of hazelnut images: (a) The original image, (b) Discrete path search template, (c) Hazelnut image extraction, (d) Binarization of the hazelnut image.

Figure 2

Binary diagram.

3. THE METHOD OF PROJECTION GRADIENT STATISTICS

Figure 3 shows the flow of the projection gradient statistics. After binarizing the hazelnut image, an adaptive circle is used to extract the projection area of the hazelnut. Then the projection gradient statistical method is used to detect the axial direction of the hazelnut.

Figure 3

Image processing flow.

3.1. The Extraction of Projection Area

It is necessary to select an appropriate projection area to improve robustness and efficiency. An adaptive circle is used to select the central part of the hazelnut image, since the texture in this part is much clearer, as shown in Figure 4. The center of the adaptive circle is placed at the center of gravity of the hazelnut, and its diameter is α×L, where L is the length of the long axis of the hazelnut and α is the shrinkage ratio of the adaptive circle diameter to L. The diameter is analyzed to find the optimal value; Table 1 shows that the optimal α is 0.4.

  1. The center of gravity of the hazelnut can be calculated from the coordinate values of the hazelnut image after binarization. The coordinate (x_0, y_0) of the center of gravity is the average of the coordinates of the foreground (hazelnut) pixels in the binary image.

  2. By searching the coordinate points of the hazelnut edge, the two edge points separated by the longest distance are obtained, defined as [x(a), y(a)] and [x(b), y(b)]. The distance between these two edge points, denoted by L, is the Euclidean distance shown in Equation (12),

    $L = \sqrt{(x(b) - x(a))^2 + (y(b) - y(a))^2}$    (12)

  3. The shrinkage ratio is defined as α. α×L is the adaptive circle diameter, and α determines the size of the projection area. As shown in Figure 4, two groups of hazelnut photos with different α are selected. A code sketch of this extraction follows.
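A minimal sketch of the adaptive-circle extraction, continuing the Python names introduced in Section 2 (`edge_points` is the output of the radial search) and assuming the center of gravity is taken over the black texture pixels of the binary image:

```python
import numpy as np

def adaptive_circle_mask(binary, edge_points, alpha=0.4):
    """Keep only the disc of diameter alpha * L centred on the hazelnut's
    centre of gravity; L is the long-axis length from Eq. (12)."""
    h, w = binary.shape
    ys, xs = np.nonzero(binary == 0)             # texture pixels (value 0 after Eq. 11)
    x0, y0 = xs.mean(), ys.mean()                # centre of gravity (step 1)
    pts = np.asarray(edge_points, dtype=float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
    L = np.sqrt(d2.max())                        # longest edge-to-edge distance, Eq. (12)
    r = alpha * L / 2.0                          # adaptive circle radius (step 3)
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - x0) ** 2 + (yy - y0) ** 2 <= r ** 2
    out = np.full_like(binary, 255)
    out[inside] = binary[inside]                 # central projection area, as in Figure 4
    return out
```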

Figure 4

The extracted projection region with different α.

Table 1

The angle deviation with different α (α = 1, 0.8, 0.6, 0.4, 0.2, over ten hazelnut samples). The recorded deviations are: sample 1, 37°; sample 2, 17°; sample 3, 10°; sample 4, 20°; sample 5, 35°; sample 8, 18°; sample 10, 52°.

3.2. Projection Histogram Statistical Calculation

For the extracted projection area of the hazelnut, the projection process can be expressed by the following formula [31]:

$g(\rho, \theta) = \iint_D f(x, y)\, \delta(x\cos\theta + y\sin\theta - \rho)\, dx\, dy$    (13)

where g(ρ, θ) denotes the bins of the histogram, f(x, y) denotes the pixel density of the image at position (x, y), δ(·) is the Dirac delta function, and D represents the entire image.

Since the binarized image is discrete, Equation (13) can be rewritten as Equation (14), where f_1(x, y) is the binarization result of f(x, y):

$g(\rho, \theta) = \frac{1}{2} \sum_{(x, y) \in S} \left(1 + s(f_1(x, y))\right) \left(h_{left}^{\rho} + h_{right}^{\rho}\right)$    (14)
$h_{left}^{\rho} = \left(1 + x\cos\theta + y\sin\theta - \rho\right) \delta\left(\lfloor x\cos\theta + y\sin\theta \rfloor - \rho + 1\right)$    (15)
$h_{right}^{\rho} = \left(1 - x\cos\theta - y\sin\theta + \rho\right) \delta\left(\lfloor x\cos\theta + y\sin\theta \rfloor - \rho\right)$    (16)

where $s(i) = \begin{cases} 1, & i = 255 \\ -1, & i = 0 \end{cases}$, ρ ∈ [0, N−1] is the index of the bins, θ ∈ [0°, 180°) is the rotation angle, S indicates the projection area, and ⌊·⌋ is the floor function, which rounds toward minus infinity. $h_{left}^{\rho}$ and $h_{right}^{\rho}$ denote the contributions to the current bin from pixels lying to the left and right of ρ, as shown in Figure 5.

Figure 5

Projective histogram demonstration.

Figure 5 shows a 3×3 pixel projection histogram with vertical projection. In projection diagram (a), all pixels are located exactly on the projected coordinates, while in projection diagram (b) many pixels do not fall on the projected coordinates. Therefore, the points that do not fall on the projected coordinates need further processing. For example, point A lies to the left of the vertical line ρ: the closer A is to ρ, the higher the value of $h_{left}^{\rho}$ and the greater its impact on bin ρ; conversely, the farther A is from ρ, the lower the value of $h_{left}^{\rho}$ and the smaller its impact. Similarly, point B lies to the right of the vertical line ρ: the closer B is to ρ, the higher the value of $h_{right}^{\rho}$ and the greater its impact on bin ρ; the farther B is from ρ, the lower the value of $h_{right}^{\rho}$ and the smaller its impact.
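In code, Equations (14)–(16) amount to linear binning: each projected pixel splits its unit weight between the two bins that bracket its projection. A minimal NumPy sketch follows; projecting the 255-valued pixels (those with s(i) = 1) and rescaling the projections into a fixed number of bins are our implementation choices, since the paper indexes bins directly by ρ.

```python
import numpy as np

def projection_histogram(binary, theta_deg, n_bins=64):
    """Project the white (255) pixels onto direction theta and split each
    pixel between its two neighbouring bins with the linear weights of
    h_left and h_right in Eqs. (15)-(16)."""
    theta = np.deg2rad(theta_deg)
    ys, xs = np.nonzero(binary == 255)            # pixels with s(f1(x, y)) = 1
    t = xs * np.cos(theta) + ys * np.sin(theta)   # x cos(theta) + y sin(theta)
    t -= t.min()                                  # shift so bin indices start at 0
    t *= (n_bins - 1) / max(t.max(), 1e-9)        # scale into [0, n_bins - 1]
    base = np.floor(t).astype(int)                # the floor in Eqs. (15)-(16)
    frac = t - base
    g = np.zeros(n_bins)
    np.add.at(g, base, 1.0 - frac)                # h_right: weight toward bin rho = base
    np.add.at(g, np.minimum(base + 1, n_bins - 1), frac)  # h_left: weight toward rho + 1
    return g
```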

According to the rule shown in Equation (14), for a certain projection area, the image is rotated 180 times with a 1° rotation interval, so 180 projection histograms are obtained. Figure 6 shows several typical projection histograms.

Figure 6

Projection histograms with different rotation angles.

For the same area surrounded by the adaptive circle, different projection angles correspond to different projection histograms.

In order to find the relationship between the projection histogram and the axial orientation of the hazelnut, we calculate the gradient sum S of the projection histogram.

Y is the histogram sequence at a certain projection angle,

$Y = \{y(1), y(2), \ldots, y(n)\}$    (17)

ΔY is the difference sequence,

$\Delta Y = \{\Delta y(2), \Delta y(3), \ldots, \Delta y(n)\}$    (18)
$\Delta y(i) = y(i) - y(i-1), \quad i = 2, 3, \ldots, n$    (19)

The gradient sum S of the projection histogram is defined as the sum of the absolute values of Δy(i),

$S = \sum_{i=2}^{n} \left| y(i) - y(i-1) \right|$    (20)

The gradient sums S of the 180 projection histograms are calculated, and the functional curve of S versus rotation angle is shown in Figure 7d. The rotation angle at which S reaches its peak value gives the axial orientation of the hazelnut. In this case, the axial angle is 33°.
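The angle search itself is a simple peak-picking loop. A sketch using the `projection_histogram` function above; the bin count is an assumed parameter:

```python
import numpy as np

def axial_angle(binary, n_bins=64):
    """Return the rotation angle (degrees) whose projection histogram has
    the largest gradient sum S, Eq. (20)."""
    best_angle, best_s = 0, -1.0
    for theta in range(180):                  # 1-degree rotation interval
        g = projection_histogram(binary, theta, n_bins)
        s = np.abs(np.diff(g)).sum()          # S = sum of |y(i) - y(i-1)|
        if s > best_s:
            best_s, best_angle = s, theta
    return best_angle
```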

1000 hazelnut images are taken for axial recognition. As shown in Table 2, the accuracy on the hazelnut images without and with the adaptive circle is 72% and 94%, respectively. Therefore, extraction with the adaptive circle greatly improves the recognition accuracy. Figure 8 shows the recognition results of different hazelnuts when S reaches its peak value.

4. EXPERIMENTS AND ANALYSIS

In addition to the projection gradient statistics, we also designed another feasible algorithm. This section introduces its implementation and compares the two methods.

4.1. Template Matching Algorithm

Method                             Error    Correct    Accuracy (%)
Without extraction                 280      720        72
Extraction with adaptive circle    60       940        94
Table 2

The accuracy of hazelnuts axial recognition.

Figure 7

The projection histogram and recognition result: (a) the projection area, (b) projection histogram, (c) the result of axial detection, (d) the gradient sum of projection histogram.

Figure 8

The recognition results of hazelnut axial orientation.

Figure 9 shows the diagram of the template matching method. After binarizing the hazelnut image, a rectangular stripe template is established based on the characteristics of the hazelnut image. Then the template is rotated, and the convolution of the template with the detected image is calculated.

4.1.1. The establishment of rectangular stripe template

Only along the hazelnut's long axis direction is the vertical gradient of the surface texture smallest and the horizontal gradient largest, as shown in Figure 10a. Inspired by the Prewitt operator [32], we devised a novel stripe template to detect the horizontal gradient during rotation. Moreover, to improve the robustness of the stripe template, we set the size of the template matrix to m×n (m and n are variable parameters that can be adjusted according to the actual situation); the template matrix is shown in Figure 10b. With this stripe template, the convolution result reaches its maximum value only when the rotated template aligns with the long axis direction.

As shown in Figure 10c, the black and gray columns correspond to the black and white parts of the binarized hazelnut image, respectively. In the template matrix shown in Figure 10b, +1, −1, and 0 are used to represent the values of the black, gray, and middle parts of Figure 10c. The optimal row and column lengths of the template matrix are decided by the convolution results with different template sizes, as shown in Table 3; in this paper, the optimal row and column lengths are 50 and 9, i.e., m = 50 and n = 9. A sketch of one possible template construction is given below.
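Since the paper specifies the stripe pattern only graphically, the following sketch is a guess at one plausible layout: stripes of +1 (black) and −1 (gray) separated by 0-valued transition rows, at the optimal size m = 50, n = 9, with scipy producing the rotated copies of Figure 11.

```python
import numpy as np
from scipy.ndimage import rotate

def stripe_template(m=50, n=9, stripe=3):
    """Build an n x m stripe template whose rows cycle through
    +1 (black stripe), 0 (transition), -1 (gray stripe), 0 (transition)."""
    cycle = [1, 0, -1, 0]
    rows = [np.full(m, cycle[(r // stripe) % 4], dtype=float) for r in range(n)]
    return np.vstack(rows)

# 180 rotated copies with a 1-degree interval, as in Figure 11
templates = [rotate(stripe_template(), angle, reshape=True, order=0)
             for angle in range(1, 181)]
```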

4.1.2. The discussion of optimal template size and axial recognition results

The rectangular stripe templates are obtained with a 1° rotation interval, and some of the templates are shown in Figure 11.

The binarized hazelnut image is sequentially convolved with the 180 obtained rectangular stripe templates, as shown in Equations (21)–(23),

$H_\theta = \sum_{[x, y] \in I} h_\theta[x, y]$    (21)
$h_\theta = \left| h[x, y] * g_\theta(x, y) \right|$    (22)
$H_{fit\theta} = \max_{\theta \in [1, 180]} H_\theta$    (23)

Figure 9

Image processing diagram.

Figure 10

(a) The vertical and horizontal directions of hazelnuts, (b) Template matrix, (c) Rectangular stripe template.

Template Size    Recorded Errors                       Average
5*40             10°                                   14°
9*40             16°                                   5.5°
15*40            21°, 36°, 65°, 19°, 31°               23.75°
5*50             17°, 26°, 21°                         11.63°
9*50             —                                     1°
15*50            15°, 19°, 10°, 65°, 25°, 32°          21.75°
5*60             10°, 11°, 15°                         6.75°
9*60             15°                                   5.5°
15*60            25°, 15°, 36°, 14°, 74°               23.4°
Table 3

The detected angle error of hazelnut axial direction with different template sizes (eight samples per size).

where $g_\theta[x, y]$ is the template matrix at rotation angle θ, θ = 1, 2, …, 180; h[x, y] denotes the brightness value of the pixel point (x, y); I denotes the convolution area; $H_{fit\theta}$ is the maximal convolution value over all θ; and fitθ, the angle that yields this maximal convolution result, is the rotation angle of the hazelnut axial direction.
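A sketch of this matching loop follows; scipy's FFT convolution stands in for the paper's direct convolution, and `templates` comes from the construction in Section 4.1.1.

```python
import numpy as np
from scipy.signal import fftconvolve

def match_axial_angle(binary, templates):
    """Convolve the binarised image with each rotated stripe template
    (Eq. 22), sum the absolute responses over the image (Eq. 21), and
    keep the angle with the largest total (Eq. 23)."""
    best_angle, best_h = 0, -np.inf
    img = binary.astype(float) / 255.0           # h[x, y]: pixel brightness
    for theta, g in enumerate(templates, start=1):
        # 'valid' mode assumes the image is larger than the template
        response = np.abs(fftconvolve(img, g, mode='valid'))  # Eq. (22)
        h_theta = response.sum()                 # Eq. (21)
        if h_theta > best_h:                     # Eq. (23)
            best_h, best_angle = h_theta, theta
    return best_angle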

Figure 11

Rectangular stripe template with different rotation angles.

Figure 12 shows rectangular stripe templates of different sizes; a template size of 5*40 means that the length and width of the template are 5 and 40 pixels, respectively. The convolution results in Table 3 show that the optimal recognition accuracy is obtained when the template size is 9*50: the average angle error of the hazelnut axial direction is then 1°, whereas it is as high as 23.75° when the template size is 15*40. Figure 13 shows the recognition results of different hazelnuts when H_θ reaches its peak value.

Figure 12

Rectangular stripe templates with different size.

Figure 13

The recognition results of hazelnut axial direction with template matching.

4.2. Comparison and Experiments

The personal computer (PC) used in the experiments has an Intel dual-core 3.0 GHz CPU and 8 GB RAM, and the algorithms are implemented in C#. Ten groups of hazelnut samples were randomly selected, and each group contained 100 hazelnut images with different poses. The axial directions of the 1000 hazelnut images were detected by both projection gradient statistics and template matching, and the recognition time of each group is recorded in Table 4, in the Projection and Template rows, respectively. From Table 4, the average times of the projection gradient statistics and template matching methods are 45.4 ms and 89.6 ms, respectively. Template matching is slower than projection gradient statistics due to the repeated calculation of pixel points when setting the step length. That is why we choose the projection gradient statistics method.

Label 1 2 3 4 5 6 7 8 9 10 Average
Template (ms) 87.7 88.6 94.3 90.7 86.5 85.2 88.1 92.1 93.2 90.3 89.6
Projection (ms) 46.7 48.6 44.3 45.7 46.8 45.6 47.1 44.1 43.2 42.3 45.4
Table 4

The comparison of time costs.

Apart from the time cost, we also studied the accuracy of the two methods. As shown in Table 5, 1000 hazelnut images were detected in our experiments to confirm the recognition accuracy: the accuracy of projection gradient statistics is 94.2% and that of template matching is 88.6%. In summary, in both recognition time and accuracy, the projection gradient statistics method is clearly better than the template matching method.

Method                            Error    Correct    Accuracy (%)
Projection gradient statistics    58       942        94.2
Template matching method          114      886        88.6
Table 5

The comparison of the recognition accuracy.

5. CONCLUSION

This paper proposes an innovative method to recognize the axial direction of hazelnuts based on machine vision. The sum of the absolute values of the histogram gradients is calculated from the binarized hazelnut image. In addition, a template matching method is discussed, in which a rectangular stripe template is established and the rotated template is convolved with the binarized hazelnut image. The recognition results of the two methods are compared in terms of recognition time and accuracy.

CONFLICT OF INTEREST

The authors declare that they have no competing interests.

AUTHORS' CONTRIBUTIONS

All authors contributed to the work. All authors read and approved the manuscript.

ACKNOWLEDGMENTS

This research is financially supported by the National Key R&D Program of China (No. 2019YFB1405500), Natural Science Foundation of China (61877065), and Liaoning Province Natural Science Foundation (No.2019-KF-23-08).

REFERENCES

24. I. Onaran, B. Dulek, T.C. Pearson, et al., Detection of empty hazelnuts from fully developed nuts by impact acoustics, in 2005 13th European Signal Processing Conference (Antalya, Turkey), 2005, pp. 1-4. https://www.researchgate.net/publication/252351354
30. R.J. Schalkoff, Digital Image Processing and Computer Vision, first ed., Wiley, New York, NY, USA, 1989. https://www.researchgate.net/publication/37406591
32. J.M.S. Prewitt, Object enhancement and extraction, Picture Processing and Psychopictorics, Vol. 10, 1970, pp. 15-19. https://www.researchgate.net/publication/200132407
