Figure 2
Example of embedded, paired group-contrast comparisons among Bemisia tabaci species complex puparial specimens. Upper figure: original images of two randomly selected specimens from the East Pacific Ocean image set (F30o1-7 and F30o2-17) and two from the Israel-Med image set (F18o1-36 and F18o2-15). Rather than simply comparing image vectors, and so being restricted to small sample sizes, the embedded image comparison protocol used in this investigation performed pairwise comparisons across all images in the sample (e.g., six comparisons for this four-image example), with the structure of group similarities and differences represented by Euclidean distances between image vectors. Lower figure: summary of all six comparisons possible for this four-image example in terms of difference images and image distance values ($d$) for both the full-resolution ($500 \times 500$ pixel) and reduced-resolution ($28 \times 28$ pixel) image comparisons, the latter values given in parentheses. Here, image pairs that differ more strongly appear darker than pairs that differ less, with particular regions and/or structures of distinction appearing as dark highlights. Note that, for both image resolutions, images belonging to the same group typically exhibit shorter image distances than those belonging to different groups. Under this approach to representing the image (= specimen) similarity structure, CNN training focuses on constructing a set of convolution filters that maximize between-groups image distances and minimize within-groups image distances.
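The pairwise comparison scheme described in the caption can be illustrated with a minimal sketch (this is not the authors' code; the specimen names label random stand-in arrays, and only the reduced $28 \times 28$ resolution is used): each image is flattened to a vector, the Euclidean distance $d$ is computed for every unique pair, and the absolute per-pixel difference gives the corresponding difference image.

```python
import numpy as np
from itertools import combinations

# Stand-in images: random 28 x 28 arrays labeled with the specimen
# identifiers from the caption (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
names = ["F30o1-7", "F30o2-17", "F18o1-36", "F18o2-15"]
images = {name: rng.random((28, 28)) for name in names}

# All unique pairs: C(4, 2) = 6 comparisons for this four-image example.
results = {}
for a, b in combinations(names, 2):
    d = np.linalg.norm(images[a].ravel() - images[b].ravel())  # image distance
    diff_image = np.abs(images[a] - images[b])                 # difference image
    results[(a, b)] = (d, diff_image)
    print(f"{a} vs {b}: d = {d:.3f}")
```

With real specimen images, within-group pairs (e.g., the two East Pacific Ocean specimens) would be expected to yield smaller $d$ values than between-group pairs, which is the structure the CNN training objective exploits.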
