Article

Reflection Symmetry Detection in Earth Observation Data

Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia
* Author to whom correspondence should be addressed.
Sensors 2023, 23(17), 7426; https://doi.org/10.3390/s23177426
Submission received: 19 July 2023 / Revised: 22 August 2023 / Accepted: 23 August 2023 / Published: 25 August 2023
(This article belongs to the Section Remote Sensors)

Abstract

The paper presents a new algorithm for reflection symmetry detection, which is specialized to detect maximal symmetric patterns in an Earth observation (EO) dataset. First, we stress the particularities that make symmetry detection in EO data different from detection in other geometric sets. The EO data acquisition cannot provide exact pairs of symmetric elements and, therefore, the approximate symmetry must be addressed, which is accomplished by voxelization. Besides this, symmetric patterns in the top view of EO data usually contain the most useful information for further processing; thus, it suffices to detect symmetries with vertical symmetry planes. The algorithm first extracts the so-called interesting voxels and then finds symmetric pairs of line segments, separately for each horizontal voxel slice. The results with the same symmetry plane are then merged, first in individual slices and then across all the slices. The detected maximal symmetric patterns represent the so-called partial symmetries, which can be further processed to identify global and local symmetries. LiDAR datasets of six urban and natural attractions in Slovenia, of different scales and in different voxel resolutions, were analyzed in this paper, demonstrating high detection speed and quality of solutions.

1. Introduction

An object or a system (a scene is used for both from here on) is symmetric if there is a transformation, such as a translation, rotation, reflection, or a combination of these, that maps it onto itself [1]. The concept of symmetry may be generalized to various abstractions, but a great majority of studies and demonstrations, including the proposed paper, are focused on the visual symmetry perception, which is characterized by the detection and interpretation of colors and distances (depth and size). Symmetry perception has become an integral part of the individual’s perceptual organization process, where individual regularities have been selected on the basis of their relevance [2]. Such phenomena have been recognized in studies of mating and food choice habits of many animal species, including birds, dolphins, apes, and bees [3]. In humans, this concept can be generalized to a broader perception of aesthetics, safety and stability, which explains its role in different fields of human activity, including arts, architecture, biology, medicine, mathematics, and various engineering disciplines [3,4]. Further studies have demonstrated that symmetry is important for the visual system by facilitating image segmentation as well as object analysis and representation [5,6]. However, in contrast to the natural, almost self-evident symmetry perception processes in living beings, computer-aided symmetry detection is anything but simple [4]. Algorithms of image processing and computational geometry, optimization methods, and domain knowledge must be carefully designed and integrated to provide relevant, accurate, and timely solutions.
Earth observation data are huge sets of spatial or spatiotemporal samples acquired from satellites [7,8], airborne platforms [9,10], and in situ measurements on land, in water, and in the atmosphere [11]. When processed appropriately, including feature extraction, spatial and/or color (or other acquired/measured variable) segmentation, and object modeling and analysis, EO data provide a wide variety of new insights into Earth’s processes and systems [12]. Furthermore, diverse EO datasets can be fused with each other [11] or with other geospatial (GIS) data [12]. Symmetry can also be a very useful feature in this context, due to its already explained role in visual perception. However, although the use of symmetries in EO data processing has been growing rapidly over the last two decades, it is still not reaching its potential. Practically all the applications we found in the literature, such as [13,14,15,16,17,18,19,20], assume that the symmetries are known in advance, either being detected by some general algorithm or manually identified. They do not introduce any specialized symmetry detection algorithms, which would benefit from the nature of the EO data. Symmetry is used in 3D architecture, street or network of streets reconstruction [13,14,15], detection of roads [16], and extraction of railway elements [17]. Individual tree delineation from EO data, particularly LiDAR point clouds, is a popular research topic, but symmetry usually plays only a marginal role in the methods, mostly incorporated in a form of assumptions to supplement other means of segmentation [18,19]. A symmetry quality measure was recently used to predict water level in one of the largest intermittent lakes in Europe from Sentinel-2 data [20]. All these solutions are narrowly focused to scenes consisting of clearly distinguishable components and, except the last three, with smooth surfaces and rigorously controlled noise.
Symmetry is thus a transformation f: S → S, where S is the set (the scene In or a part of it, where In refers to the input set) invariant under f. We will use the term symmetry a bit inconsistently for both f and S. Symmetry detection is aimed at answering two different questions:
  • Is a considered scene symmetric? This is the global symmetry detection problem, which is solved by trying to find a transformation that concerns the whole scene, i.e., S = In.
  • Does the scene contain any (smaller) symmetric patterns? This is the partial symmetry detection, where the patterns with the property of being symmetric must be identified and extracted from the scene together with the attributes of the symmetry transformation, i.e., S ⊆ In.
The concept of the partial symmetry was used in this manner by Mitra et al. [21]. Other authors mostly consider a similar concept of the local symmetry instead, as discussed at the beginning of Section 2.
There are three basic symmetry types regarding the transformation function: reflection, rotational, and translational symmetry. We only consider detection of the reflection symmetry across a symmetry plane in 3D in a perpendicular direction (or across the symmetry axis in 2D). Note that other types of reflection symmetry exist where the mirroring transformation operates either across an arbitrary surface in 3D, across an arbitrary curve in 2D or 3D, or across a point in 1D, 2D, or 3D. The latter is known as central or radial symmetry. The mirroring direction need not be perpendicular.
In this paper, a new algorithm is presented, predominantly designed to detect partial reflection symmetries in Earth observation (EO) data. To the best of our knowledge, there are no symmetry detection algorithms tailored to the particularities of EO data. Existing applications of symmetries in EO data therefore typically use slower general algorithms or even manual symmetry labeling. The presented algorithm is required to detect multiple symmetries if they exist. The principle of maximality is employed to reduce their number. If there are two symmetries with the same symmetry plane [21], i.e., f: S1 → S1 and f: S2 → S2, and if S1 ⊂ S2, then only S2 is accepted as the solution. If, for example, a global symmetry is found as a special case of the partial symmetry, then none of its subsets are considered as a symmetry. With a small extension, the algorithm is also able to identify the so-called local symmetries. Section 1 is concluded with a survey of previous works on reflection symmetry detection, with emphasis on partial symmetry. A part of the survey addresses symmetry detection and applications in processing EO data. Section 2 first stresses the particularities that make symmetry detection in EO data different from detection in other geometric sets. The structure of the novel algorithm and the detailed descriptions of its individual parts are given afterwards. Section 3 demonstrates the high detection speed and quality of solutions through visualizations and analyses of LiDAR datasets of six urban and natural attractions in Slovenia. The results, some minor additional functionalities, limitations, possible improvements, extensions, and applications are discussed in the concluding Section 4.

Related Works

In this subsection, we stress the main features and differences of the existing methods for global and local reflection symmetry detection, and their impact on the proposed approach. Mitra et al. [22] made an extensive survey, while some newer approaches were discussed in our related research work [4] recently.
The global reflection symmetry methods use a variety of different approaches to calculate the symmetry plane (or more of them) on the entire scene. Chen et al. [23] used the weighted principal component analysis (PCA) method to determine the initial symmetry plane, which was then refined iteratively by adjusting the weights. Schiebener et al. [24] determined the symmetry plane candidates by the RANSAC method. Combés et al. [25] and Ecins et al. [26] based their methods on an iterative nearest point search. Nagar and Raman [27] transformed the problem into the optimization problem on a smooth Riemannian product manifold. Hruda et al. [28] used a gradient-based optimization designed on their differential symmetry measure. Other concepts utilize edge detection proposed by Elawady et al. [29], multiple virtual viewpoints and their entropy distribution by Li et al. [30], heat diffusion by Sipiran et al. [31], and extended Gaussian images used by Sun and Sherrah [32] and Kakarala [33]. Some of the listed algorithms operate on 3D point clouds [23,24,25,26,27,28], some on triangular meshes [30,31,32], and some on both [33]. The method of Elawady et al. [29] is limited to raster images, which may be of interest for satellite images, aerial photographs, or orthophotos. In any case, from the point of view of the method introduced in this paper, the most interesting methods are those that process voxelized data. Korman et al. [34] proposed an algorithm with a scalable sampling rate, depending on the total variation of the shape. It also detects global rotational symmetries (such as [32]). Podolak et al. [35] first sampled the object’s surface by the Monte Carlo approach and embedded the samples into a voxel grid. All possible symmetry planes were then determined among the samples and evaluated by a symmetry measurement function. This method is particularly important because of its adaptation to local symmetry detection [36].
Unlike other described solutions, the method by Žalik et al. [4] rotates the candidate symmetry plane around the centroid and uses hierarchical space subdivision to identify good candidates.
Partial symmetry is not strictly and uniquely defined in the literature. Some authors use the term as a synonym for local symmetry, others for approximate global symmetry, and others for approximate symmetry in general. We use it as a superclass of both the global and local symmetry. The latter is sometimes handled by segmenting the scene into individual parts and then detecting global symmetry separately on each of them [24,26,31]. True local reflection symmetry detection methods were proposed in [21,36,37,38,39]. Simari et al. [37] and Calliere et al. [38] both proposed methods on triangular meshes. The former is based on the covariance matrix of weighted gravity centers, and the latter on the Hough transform. Mitra et al. [21] presented a powerful general method that handles global and local reflection and rotational symmetries in a point cloud. Curvature/Laplacian local shape descriptors are computed for each point, requiring that the points are topologically connected into a grid. The points with similar descriptors are paired. The pairs are clustered into symmetric patches, which are then connected in a graph. Hruda et al. [39] recently proposed a method, based on [21,28], which uses a simple PCA-based shape descriptor and does not require a grid. Speciale et al. [36] proposed two methods for voxelized scenes. The first one checks all the pairs of randomly selected voxels, as described in [35]. The second method marks the voxels with large enough gradients and curvatures as boundary voxels, pairs only these, and then additionally samples the pairs randomly to determine the candidates for symmetry planes.
Machine learning approaches are increasingly popular in this field as everywhere else, but they all require a rich training set and a relatively demanding and time-consuming learning process. Current solutions in 3D are limited to the global reflection symmetry either in point clouds [40,41] or voxel grids [42], while some solutions in raster images address local reflection symmetry as well [43].

2. The Method

The presented algorithm is predominantly designed to detect partial reflection symmetries in EO data. With some pre-processing and a few minor adjustments, other geospatial (GIS) data can also be processed, as we will explain in Section 4. An early version of the algorithm was already presented in [44], but the method has since been upgraded with several improvements, speed-ups, and new functionalities. Much larger datasets with higher voxel resolutions, which were previously processed in tens of hours or even a few days, can now be handled in a few hours or even faster. Before describing the structure of the method and detailing its building blocks, let us explain the concept of partial symmetry and the peculiarities of the EO data that had a significant impact on the design of the method.
Partial symmetry does not refer to a spatially and/or functionally distinguished part (component) of the system, but to an arbitrary subset of the scene, ranging from a symmetrical pair of two tiny details to the whole scene. On the other hand, local symmetry refers to a spatially distinct symmetrical region that does not contain other (out of the symmetry) points in its interior. Such an area may be, for example, a bounding box, a convex hull, a concave hull or, simply, a connected area, intersecting with a plane of symmetry. Different authors use different informal definitions. We use the last of the above, which could also be called a bilateral partial symmetry. Figure 1 demonstrates the difference between the global, partial, and local symmetries. A 2D input object from Figure 1a has two obvious symmetry axes: the horizontal and the vertical one. The former subdivides the object into two symmetric halves (Figure 1b), representing the global but also the partial and local symmetry. The principle of maximality assures that this is the only partial and local symmetry to be detected with regard to the horizontal axis. The vertical axis defines the maximal partial symmetry from Figure 1c and two maximal local (but not maximal partial) symmetries from Figure 1d,e.
EO data are mostly acquired downwards from satellites, airplanes, drones, etc. This means that the width and length of a considered geographic area are usually much greater than the range of altitudes. Similarly, horizontal distances between in situ environmental sensors are typically longer than the differences of their altitudes. Besides this, many more airborne and satellite EO data samples are collected from the visible top surfaces than from the side and bottom ones. Consequently, it is more likely to explore symmetries from above than from the side. We benefit from this by designing the method to detect only symmetries with nearly vertical symmetry planes. The current implementation reads LiDAR (light detection and ranging) point clouds stored in LAS files only. The symmetries in geometry (shape) are only detected in such data. However, geographical maps are also predominantly two-dimensional, so the above observation can be generalized to symmetry detection in geospatial data other than altitude, which will be explained in more detail in Section 4. Note that the geographical area considered should not be too large anyway, as the method uses Euclidean geometry, ignoring the influence of the Earth’s roundness and cartographic projection.
On account of the previous observation, we had reason to expect significant simplifications and speed-ups of the method, but the next feature of the EO data complicates the method somewhat. Because of the sampling, points of an “original” part and its mirrored counterpart rarely match exactly. As a result, the method must detect approximate symmetries instead of ideal exact ones, which is usually a much harder task. This topic is discussed further in Section 2.1.
The bottom-up approach is another crucial feature of the proposed algorithm. By limiting the solution space to vertical symmetry planes only, it suffices to detect symmetries in horizontal slices and then combine them on the basis of the common detected symmetry planes. This constraint crucially contributes to the affordable time complexity of the algorithm in the EO data domain, although the majority of the algorithm’s steps can be easily generalized when we can afford a slower execution. The processing in a single slice can be further decomposed into a search for the so-called basic symmetries, which are merged gradually on the same principles as the symmetries in different slices. The basic symmetry is the symmetry of two geometric primitives, such as points (voxels), line segments, or more complex structures. Our choice is line segments. The core idea is that each line segment which appears in some symmetry should have a symmetric pair (sibling) with the same length somewhere in the slice. The classification of points on the two sides of the symmetry plane is based on a simple vector product test.
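Within a horizontal slice, the mentioned vector product test reduces to a 2D orientation test. A minimal sketch of this classification (our illustration with hypothetical names, not the authors' code) could look as follows:

```python
# Hypothetical sketch: classifying a point to one side of the vertical
# symmetry plane, which appears as a line in the top view of a slice.

def side_of_line(a, b, p):
    """Return +1, -1, or 0 for p left of, right of, or on the line a->b,
    using the sign of the 2D cross (vector) product."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    if cross > 0:
        return 1
    if cross < 0:
        return -1
    return 0
```

Points with opposite signs lie on opposite sides of the symmetry plane; a zero result places the point in the plane itself.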
A rough outline of the algorithm is presented below, and a more detailed explanation of the individual steps is given in the subsections that follow.
  • Voxelization.
  • Identification (and filtering) of material voxels.
  • Identification of interesting voxels.
  • For each horizontal slice of the voxel grid:
    ∘ Identification of line segments and clustering by their lengths.
    ∘ For each cluster of line segments:
      * Identification of basic symmetries among pairs of line segments.
      * Merging symmetries.
    ∘ Merging symmetries from different clusters.
  • Merging symmetries from different slices.
  • For each detected (partial) symmetry S:
    ∘ Insertion of “non-interesting” material voxels into S.
    ∘ Extending S by mirroring its voxels accurately.
  • Post-processing (eventual detection of global and local symmetries included here).

2.1. Voxelization

Voxelization is a space subdivision in all three coordinate directions, resulting in a uniform voxel grid. It is a common acceleration technique in geometric applications, since the arrangement of geometric elements, e.g., points, into the grid cells with constant access time often speeds up various searching operations. However, its main role in the proposed method is in enabling the approximate geometry and, consequently, the approximate symmetry detection. The idea is straightforward: all data samples (e.g., LiDAR points) within the same grid cell (voxel) are approximated with the voxel’s central point. The terms hard or full voxels are sometimes used, as the opposite to air, soft, or empty voxels. The number of the material voxels is usually lower than the number of the input samples, which may accelerate the method significantly. Besides this, the method can be accelerated further, as follows.
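The voxelization step can be sketched in a few lines (a simplified illustration under our own naming; the actual implementation reads LAS files and stores additional per-voxel data):

```python
# Illustrative sketch: snapping points to a uniform voxel grid. Every point
# in a cell is approximated by the cell's central point, and cells that
# contain at least one point become material voxels.

def voxelize(points, origin, size):
    """Map each 3D point to its voxel index; return the set of material
    voxel indices and a dict of their central points."""
    material = set()
    for x, y, z in points:
        i = int((x - origin[0]) // size)
        j = int((y - origin[1]) // size)
        k = int((z - origin[2]) // size)
        material.add((i, j, k))
    centers = {v: (origin[0] + (v[0] + 0.5) * size,
                   origin[1] + (v[1] + 0.5) * size,
                   origin[2] + (v[2] + 0.5) * size)
               for v in material}
    return material, centers
```

Two of the three points below fall into the same cell, so only two material voxels remain, which illustrates how voxelization reduces the input size.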
Theoretically, the symmetries should be identified among all the material voxels, separately for each horizontal slice of the voxel grid. But this would result in numerous trivial solutions (e.g., “infinitely” many symmetries could be found on a flat surface), so we further reduce the set of candidates for the symmetry detection by extracting the so-called interesting voxels.
A material voxel is considered interesting for further processing if its 3D surroundings, defined by adjacent material voxels, are not flat. We therefore test each material voxel against the pattern of its 26 adjacent voxels and retain as interesting those that are not in the middle of a vertical, horizontal, or diagonally slanted local surface. Such interesting voxels represent the input for the next step of line segment identification. Figure 2a–i demonstrates the situations where a central voxel is not identified as interesting. Figure 2j shows an example of an interesting voxel.
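The published pattern set of Figure 2 is not reproduced here; the sketch below only approximates the idea with a simple test of our own design: a material voxel counts as interesting unless all of its material 26-neighbours lie in one canonical plane through it (three axis-aligned and six diagonally slanted orientations are assumed here).

```python
# Our reconstruction of the flatness test, not the exact published patterns.
# A neighbour offset d lies in the plane through the centre with normal n
# iff the integer dot product d.n is zero.

CANONICAL_NORMALS = [
    (1, 0, 0), (0, 1, 0), (0, 0, 1),            # axis-aligned planes
    (1, 1, 0), (1, -1, 0), (1, 0, 1),           # diagonally slanted planes
    (1, 0, -1), (0, 1, 1), (0, 1, -1),
]

def is_interesting(voxel, material):
    i, j, k = voxel
    neigh = [(di, dj, dk)
             for di in (-1, 0, 1) for dj in (-1, 0, 1) for dk in (-1, 0, 1)
             if (di, dj, dk) != (0, 0, 0) and (i + di, j + dj, k + dk) in material]
    if not neigh:
        return False                 # isolated voxel: nothing to pair it with
    for n in CANONICAL_NORMALS:
        if all(d[0] * n[0] + d[1] * n[1] + d[2] * n[2] == 0 for d in neigh):
            return False             # all neighbours form one flat local surface
    return True
```

A voxel in the middle of a flat sheet is rejected, while the same voxel becomes interesting as soon as a neighbour sticks out of the sheet, e.g., at a roof edge.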
The algorithm is fully scalable, as the resolution of voxelization can vary from a few centimeters, or even lower, up to tens or hundreds of meters, depending on the data acquisition technology and the intended use. We currently use a uniform voxel side length in all three coordinate directions. As the EO data are usually georeferenced, the voxel size is entered in meters. Furthermore, the grid coordinate system is oriented such that two axes are horizontal.

2.2. Clustering

Voxelization is followed by the identification of line segments, among which the basic symmetries will be searched for. A line segment connects the central points of two interesting voxels in the same horizontal voxel grid slice. Furthermore, only line segments that pass mainly through material voxels are considered in this step. Basic symmetry detection between two line segments, where one lies on an object’s surface and the other passes through the air, would be meaningless. The default threshold for the share of material voxels in a regular line segment is set to 80%.
Two line segments may be symmetric only if they have the same length. Therefore, the regular line segments extracted in this step are arranged into clusters by their lengths. Figure 3 demonstrates the impact of voxelization on the line segment lengths and the angles between pairs of them. Low-resolution voxelization can turn the parallel line segments from Figure 3a into the perpendicular ones in Figure 3b, or a line segment can even degenerate into a point. The higher-resolution voxelization shown in Figure 3c alleviates this problem. This confirms the meaningfulness of the aforementioned 80% threshold. Besides this, a user defines tolerances for the comparison of lengths and angles in the subsequent step of basic symmetry detection.
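Clustering by length can be as simple as binning with the user-supplied tolerance. The following is a hedged sketch (our binning scheme; the paper does not prescribe an exact clustering method):

```python
# Simple length-based clustering of 2D line segments within a slice.
# Segments whose lengths agree within roughly one tolerance unit fall
# into the same bin and are compared for symmetry later.
import math

def cluster_by_length(segments, tol):
    """segments: list of ((x1, y1), (x2, y2)). Returns dict: bin -> segments."""
    clusters = {}
    for seg in segments:
        (x1, y1), (x2, y2) = seg
        length = math.hypot(x2 - x1, y2 - y1)
        clusters.setdefault(round(length / tol), []).append(seg)
    return clusters
```

Only segments inside the same cluster need to be paired in the next step, which reduces the number of candidate pairs considerably.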

2.3. Basic Symmetry Detection

As this step runs separately for each horizontal voxel grid slice, it appears quite straightforward. However, the use of approximate geometry requires a short explanation, aided by Figure 4. Note that the situation may be treated as a 2D problem, because the central points of all voxels in an individual horizontal slice have the same z-coordinate.
In the ideal geometry from Figure 4a, the test is performed as follows.
  • c1 = (p1 + p2)/2.
  • c2 = (p3 + p4)/2.
  • c = (c1 + c2)/2.
  • Determine the line S such that S ⊥ (c1, c2) and c ∈ S.
  • Determine the angle α between (p1, p2) and S.
  • If the angle between S and (p3, p4) is also α, then p1p2 and p3p4 are symmetric across S; otherwise, they are not.
In the approximate geometry from Figure 4b,c, points  p 1  to  p 4  are replaced by the voxel centers  v 1  to  v 4 , respectively. The test is then performed as follows.
  • If ||v1v2| − |v3v4|| > Tdistance, then exit without a symmetry detected.
  • c1 = (v1 + v2)/2.
  • c2 = (v3 + v4)/2.
  • c = (c1 + c2)/2.
  • Determine the line S such that S ⊥ (c1, c2) and c ∈ S.
  • Determine the angle α1 between (v1, v2) and S.
  • Determine the angle α2 between S and (v3, v4).
  • If |α1 − α2| ≤ Tangle, then v1v2 and v3v4 are symmetric across S; otherwise, they are not.
Smaller voxels in Figure 4c allow smaller tolerances Tdistance and Tangle in comparison to Figure 4b, but they significantly increase the time complexity. Note that this is computationally the most demanding task, with the theoretical time complexity O(n⁴), where n is the number of voxels. This would be achieved if the voxel space had a single slice and each of the O(n²) line segments was compared to all O(n²) others. The previous steps of extracting interesting voxels and grouping them into clusters not only prevent the calculation of trivial (meaningless) symmetries, but, above all, reduce the number of pairs of line segments that need to be compared here. Note that the theoretical time complexity may differ from the aforementioned O(n⁴) if some criterion other than the altitude is used for the grid slice generation, as explained in Section 4. However, the step of basic symmetry detection remains the most demanding in any case.
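The approximate test of Section 2.3 can be condensed into a short 2D sketch (our code, not the published implementation; the degenerate case of coincident midpoints c1 = c2 is omitted for brevity):

```python
import math

def basic_symmetry(v1, v2, v3, v4, t_distance, t_angle):
    """Return (c, direction) of the candidate symmetry line S if the
    segments v1v2 and v3v4 are approximately symmetric across S, else None."""
    if abs(math.dist(v1, v2) - math.dist(v3, v4)) > t_distance:
        return None                              # lengths differ too much
    c1 = ((v1[0] + v2[0]) / 2, (v1[1] + v2[1]) / 2)
    c2 = ((v3[0] + v4[0]) / 2, (v3[1] + v4[1]) / 2)
    c = ((c1[0] + c2[0]) / 2, (c1[1] + c2[1]) / 2)
    sx, sy = c2[1] - c1[1], c1[0] - c2[0]        # S is perpendicular to c1->c2

    def acute_angle_to_s(a, b):
        ux, uy = b[0] - a[0], b[1] - a[1]
        dot = abs(ux * sx + uy * sy)
        cross = abs(ux * sy - uy * sx)
        return math.atan2(cross, dot)            # acute angle between line and S

    if abs(acute_angle_to_s(v1, v2) - acute_angle_to_s(v3, v4)) <= t_angle:
        return c, (sx, sy)
    return None
```

For a segment and its exact mirror image across a vertical line, the test returns the midpoint c on that line; perturbing one endpoint beyond the tolerance rejects the pair.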

2.4. Merging

The previous step determines all symmetry planes between the pairs of regular line segments. Next, the algorithm gradually joins all the pairs that share a common plane of symmetry. The thresholds Tangle and Tdistance are reused here to provide approximate plane matching. After the merging is completed in each individual cluster, the process continues to merge the results between the clusters in each individual slice and, finally, between the slices. The black diagonal line in Figure 5 shows the symmetry plane, detected in all eight voxel grid slices, representing the Maribor Cathedral. The slices are arranged from the lowest (Figure 5a) to the highest (Figure 5h). The results obviously allow merging. In this and all subsequent figures, symmetric points on both sides of the symmetry plane are colored red and blue, those in the symmetry plane are green, while the points out of the symmetry are gray. Note that the topmost slice in Figure 5h is not empty, as it contains a point hidden behind the line representing the symmetry plane.
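The merging criterion can be sketched as follows (our formulation, with hypothetical names): a vertical symmetry plane is represented by the angle θ ∈ [0, π) and the signed distance ρ of its top-view line from the origin, and two detected symmetries are merged when their planes agree within Tangle and Tdistance.

```python
# Greedy merging of symmetries that share (approximately) the same plane.
# Sign handling of rho at the angular wrap-around is omitted for brevity.
import math

def planes_match(p, q, t_angle, t_dist):
    (theta1, rho1), (theta2, rho2) = p, q
    d_theta = abs(theta1 - theta2)
    d_theta = min(d_theta, math.pi - d_theta)    # angles wrap around at pi
    return d_theta <= t_angle and abs(rho1 - rho2) <= t_dist

def merge_symmetries(symmetries, t_angle, t_dist):
    """symmetries: list of (plane, voxel_set). Returns merged [plane, voxels]."""
    merged = []
    for plane, voxels in symmetries:
        for entry in merged:
            if planes_match(entry[0], plane, t_angle, t_dist):
                entry[1] |= voxels               # same plane: union the voxels
                break
        else:
            merged.append([plane, set(voxels)])
    return merged
```

The same routine serves all three merging levels, from basic symmetries within a cluster up to symmetries from different slices.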

2.5. Enhancing the Symmetries by Adding Non-Material Voxels

This subsection addresses both tasks, looped in step 6 of the algorithm, outlined at the end of Section 2:
  • Insertion of “non-interesting” material voxels into a symmetry S. The interesting voxels only establish symmetries between edges and curved parts of surfaces. This operation pairs the rest of the material (non-interesting) voxels, e.g., those on flat surfaces, with respect to the symmetry plane S.
  • Extending the symmetry S by mirroring its voxels accurately. The previous step mostly raises the density of voxels in S. However, in the case where the plane of symmetry is not parallel to one of the horizontal axes of the voxel grid, even this is not sufficient. Let us suppose a pair of symmetric interesting voxels Vi and Vj. Due to the use of approximate geometry, it may easily happen that some point from Vi does not have its pair in Vj, but in one of its neighboring voxels. If such a voxel is a material one, we add it to S. This step significantly improves the quality of the results without spending too much time.

2.6. Post-Processing

The result of the previous steps is usually a huge number of detected partial reflection symmetries. A user needs the functionality of navigating through them interactively in a list, sorted from the strongest to the weakest symmetry. The symmetry measure is defined by (1):
SM(S) = S.voxels / In.material_voxels,  (1)
where S.voxels is the number of voxels in the symmetry S, and In.material_voxels is the total number of material voxels derived from the input set In. Note that 0 < SM(S) ≤ 1, where SM(S) = 1 corresponds to global symmetries. Besides this sorting, some optional tasks can be performed in the post-processing phase:
  • Before sorting, smaller connected parts with too few voxels are filtered out from an individual symmetry. This is actually an iterative process, because the symmetric siblings of the filtered voxels must be removed as well, which can result in new small connected parts. This step is optional, as the part size threshold for filtering can be set to zero voxels. Note that a similar filtering is also performed on the total set of material voxels in the voxelization step.
  • Global symmetry detection does not require any special operation, as such symmetries are grouped at the top of the sorted list of partial symmetries. On the other hand, weak symmetries with SM(S) below the threshold may be omitted from the bottom of the list.
  • Local symmetry detection is run interactively by the user. This functionality is disabled before the partial symmetry detection is completed. Each connected component, intersected by the partial symmetry plane, represents the local symmetry. Optional filtering may be performed then, and a new sorted list is generated afterwards.
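Equation (1) and the sorting and thresholding it drives amount to only a few lines; a minimal illustration (our naming; a symmetry is represented by its voxel set):

```python
# SM(S) from Equation (1): the share of material voxels captured by S.

def symmetry_measure(symmetry_voxels, total_material_voxels):
    return len(symmetry_voxels) / total_material_voxels

def sort_symmetries(symmetries, total_material_voxels, sm_threshold=0.0):
    """Sort symmetries from strongest to weakest; drop those whose SM
    falls below the optional threshold."""
    kept = [s for s in symmetries
            if symmetry_measure(s, total_material_voxels) >= sm_threshold]
    return sorted(kept, key=len, reverse=True)
```

Since the denominator is the same for all symmetries, sorting by voxel count is equivalent to sorting by SM.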

3. Results

In this section, the algorithm’s performance is demonstrated and analyzed on six different scenes from Slovenia. The tests were run on a personal computer with an Intel i9-12900K CPU, 64 GB DDR5 RAM, and the Windows 11 Education operating system. The input LAS files were acquired from the LiDAR GIS viewer [45], publicly available software of the Slovenian Environment Agency. The map of the whole of Slovenia is covered by 1 km² tiles, scanned at a resolution of approx. 6 points per m². Furthermore, the LiDAR points in the files were classified as roofs (buildings), trees, and ground (plus unknown) by using the algorithm of Mongus et al. [9,10]. This gave us an opportunity to test the symmetry detection on different layers and their combinations. We did not use the entire 1 km² tiles, but extracted smaller point clouds using the trimming functionality in our proprietary LiDAR data processing software. The considered Slovenian landmarks are displayed in Table 1, while Table 2 lists the parameter values used to control the approximate geometry and the number of accepted solutions.
Figure 6 demonstrates the impact of the voxel size on the detected symmetries. The strongest partial symmetries at 1 m and 3 m voxel resolutions (Figure 6b,e, respectively) are practically the same, while the strongest local symmetries (Figure 6c,f) differ significantly. The material voxels at 1 m resolution are distributed more sparsely and, thus, the connected regions are usually smaller. Another interesting observation is that the symmetry in Figure 6d is only at position 940 out of 4438, despite the visually very symmetrical floor plan of the castle with respect to the detected symmetry plane. The reason is that the three towers are all in the same (south, south-west, and west) half of the castle.
The results for the stadium (Figure 7a–c) are quite expected. Note that the strongest partial symmetry (Figure 7b) also includes, more or less randomly, some red and blue points on the roofs of the surrounding houses.
The point cloud of Bled Island (Figure 7d–f) was pre-processed to remove the lake surface first. The island has two distinct bilateral symmetries (Figure 7e,f), to which both the ground and the trees strongly conform. Both are so strong that they completely dominate the symmetries of the church and the other buildings on the island.
Punta Piran (Figure 7g–i), the cape of the Piran peninsula, is so densely built up that numerous symmetries are, as expected, very strong. The strongest local symmetry is almost identical to the strongest partial symmetry in Figure 7h. The houses rise higher and higher from south-west to north-east, so the plane of the strongest symmetry was correctly expected to run in this direction. Note that the low angle tolerance of 1 degree (see Table 2) results in a large number of very similar symmetries, so that the most symmetrical street on the peninsula represents only the 14,055th strongest local symmetry out of 18,556.
The results for Slomšek square (Figure 8) closely parallel those for the Grad castle (Figure 6): the strongest partial symmetries at 1 m and 3 m voxel resolutions (Figure 8b,e, respectively) are practically the same, while the strongest local symmetries (Figure 8c,f) differ significantly. The 2495th strongest partial symmetry out of 5082 in Figure 8d was chosen only because it coincides with the strongest symmetry of the Maribor Cathedral at a very low resolution (Figure 5). At a slightly higher resolution, this partial symmetry of the church (Figure 9c,d) is ranked only 23rd out of 64, while the strongest partial symmetry (Figure 9a,b), interestingly, completely ignores the bell tower.
The numerical results from Table 3 were analyzed in two parts. In the first, we compared the figures for the Grad castle at 1 m and 3 m voxel resolutions. In the second part, we tried to draw some conclusions from the results for all six test sets at 3 m voxel resolution. Although both experiments for the Grad castle started with the same 273,540 LiDAR points, all other figures are significantly higher at 1 m resolution. The number of voxels is higher by a factor of 26, the number of material voxels by a factor of 7.4, and the number of interesting voxels by a factor of 14. The number of detected partial symmetries is 7.3 times higher, while the number of local symmetries is higher by a factor of 55. The latter is not a surprise, as a higher resolution results in worse connectivity of material voxels and, thus, in smaller but more frequent connected parts. As noted during the visual analysis of Figure 6, the strongest partial and local symmetries are nearly the same (SM values of 60.47% and 56.58%), while the weakest ones differ significantly (0.91% and 0.01%). Note that, in six of the seven columns, the minimal threshold of 5 (see Table 2) for the connected local symmetry regions was reached (row 22 of Table 3). Finally, the processing time at 1 m resolution is 84 times longer than at 3 m resolution (4.5 h against 3 min).
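The resolution scaling factors quoted above follow directly from the Grad castle columns of Table 3; a quick arithmetic check:

```python
# Grad castle figures from Table 3 (1 m and 3 m voxel resolution columns).
res_1m = {"voxels": 1_629_909, "material": 89_851, "interesting": 52_142,
          "partial": 32_263, "local": 476_393, "time_s": 16_028}
res_3m = {"voxels": 62_169, "material": 12_190, "interesting": 3_721,
          "partial": 4_438, "local": 8_616, "time_s": 191}

# Ratio of each 1 m figure to its 3 m counterpart.
factors = {k: res_1m[k] / res_3m[k] for k in res_1m}
for k, v in factors.items():
    print(f"{k}: {v:.1f}x")   # e.g. voxels: 26.2x, local: 55.3x, time_s: 83.9x
```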
Table 3 displays the results that are understandable to users, and they mostly confirm the expectations. There are, however, some interesting outliers, which would require additional data to explain. The processing times (rows 9–11) seem to be correlated with the number of interesting voxels (row 6), as expected, but Punta Piran and Slomšek square are processed significantly more slowly than the Ljudski vrt stadium. An obvious visual difference between the three cases is that the distribution of points (and material voxels) is much more compact for the square and the cape, and sparser (split into separate groups) for the stadium. This is confirmed by the ratio of the numbers of local and partial symmetries (rows 8 and 7), which is significantly higher for the stadium. Another observation is that the ratio of the numbers of interesting and material voxels is higher in urban areas (between 0.51 and 0.57 for the stadium, square, cathedral, and Punta) than in “forests” (0.30 for the castle and 0.31 for the island). However, this last assumption is probably incorrect, because the ratio for the Grad castle at 1 m resolution is 0.58. Possible answers to these and other interesting questions are further discussed in the concluding Section 4.

4. Discussion

In this paper, we presented a new algorithm which detects partial, global, and local reflection symmetries in EO data. To our knowledge, specialized symmetry detection algorithms designed to benefit from the particularities of EO data did not exist before, which represented the main research challenge of this paper. The nature of these data allowed us to look only for symmetries with near-vertical symmetry planes, but it also made the task more difficult, as we had to design calculations with approximate geometry instead of the much simpler exact geometry. The implemented algorithm was tested and analyzed on LiDAR datasets of six urban and natural attractions in Slovenia of different scales and at different voxel resolutions. It computes thousands of solutions in a few seconds on an ordinary personal computer. The analysis mostly confirmed the expectations, but there were some interesting outliers, which cannot be explained by observing the inputs and outputs alone, but rather require the internal states and data flows to be analyzed as well. In particular, the number of line segments and their distribution into clusters and voxel slices must be studied, together with the distribution of interesting voxels. Overpopulated clusters and slices are assumed to be the main reason for the slower performance in some cases. Furthermore, this analysis must be conducted separately for individual data layers and for their different combinations.
Related to the data layers, a new functionality is also planned: it will detect symmetries in the individual layers (buildings, trees, ground) separately and then merge them at the topmost merging level. Namely, it usually makes no sense to treat a voxel on a tree and a voxel on a roof as a symmetric pair.
In a similar manner, the algorithm can be adapted to handle geospatial data other than bare LiDAR points, of which only the geographic coordinates and altitude are currently considered. The prerequisite is that the data are voxelized in 3D or rasterized in 2D. In the case of pure geometric representations, such as triangulated irregular networks, where we are not interested in color or any other measurement or attribute value, this is sufficient. Regardless of the origin of the data, pre-processing may also involve interpolation, decimation, or other ways of changing the resolution. When inserting samples into a grid, it is also important to bear in mind that each grid cell will be assigned a single value, e.g., the average of the contained samples. Grayscale and color raster images (satellite imagery, aerial photos, orthophotos, quantized GIS attribute data) may then be processed easily if colors are interpreted as altitudes. Each color, or some range of similar colors, then represents a slice. However, a problem is the formation of objects when combining the symmetries of different slices, as the slices often follow each other in a sequence that has no proper geometric interpretation. Therefore, the concept of partial symmetry works here without constraints, while the concept of local symmetry is often limited to the adjacency of voxels within a slice. Finally, the approach from the planned tool for handling classified LiDAR data is applicable to values stored in a 3D voxel grid, i.e., the repertoire of values needs to be partitioned into classes that are handled separately. Let us also note that, when detecting symmetry in non-geometric data, the concept of the interesting voxel may differ from the described one and must be planned carefully.
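How a grayscale raster might be partitioned into intensity slices, as described above, can be sketched as follows. The binning scheme, the function, and its parameters are illustrative assumptions of ours, not part of the presented software:

```python
import numpy as np

def raster_to_slices(image, n_slices):
    """Interpret pixel intensities as altitudes and bin them into slices.

    image    -- 2D array of grayscale values
    n_slices -- number of equally wide intensity ranges ("slices")
    Returns a dict mapping slice index -> set of (row, col) cells.
    """
    img = np.asarray(image, dtype=float)
    lo, hi = img.min(), img.max()
    # Map each intensity to a slice index in [0, n_slices - 1].
    idx = np.minimum(((img - lo) / (hi - lo + 1e-9) * n_slices).astype(int),
                     n_slices - 1)
    slices = {}
    for s in range(n_slices):
        rows, cols = np.nonzero(idx == s)
        slices[s] = set(zip(rows.tolist(), cols.tolist()))
    return slices

# Toy 2x3 "image": dark pixels land in slice 0, bright ones in slice 1.
img = [[0, 10, 200], [5, 250, 220]]
slices = raster_to_slices(img, 2)
print(slices[0])   # the three dark cells: (0, 0), (0, 1), (1, 0)
print(slices[1])   # the three bright cells: (0, 2), (1, 1), (1, 2)
```

Each resulting slice can then be fed to the per-slice symmetry detection in place of a horizontal voxel slice.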
Another useful functionality, already developed, is an interactive tool that enables a user to enter a reference symmetry plane. The symmetries are then sorted with respect to the similarity of their symmetry planes to the reference one. When a user wants to find out how some particular symmetry is ranked, scrolling through hundreds or thousands of solutions is of little help. The functionality is implemented for both the partial and the local symmetry. It was already used in this paper to produce Figure 7i and Figure 8d.
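A possible way to rank detected symmetries against a user-supplied reference plane is sketched below. The paper does not specify the similarity metric, so the representation of a vertical plane by an azimuth and an offset, the weights, and all names are our assumptions:

```python
import math

def plane_similarity_key(plane, ref, w_angle=1.0, w_dist=0.1):
    """Deviation of a vertical symmetry plane from a reference plane.

    Each plane is (theta, d): theta is the azimuth of the plane's normal in
    radians, d its signed offset from the origin. The weighting of angular
    vs. positional deviation (w_angle, w_dist) is an assumption of ours.
    """
    d_theta = abs(plane[0] - ref[0]) % math.pi        # normals are unoriented
    d_theta = min(d_theta, math.pi - d_theta)
    return w_angle * d_theta + w_dist * abs(plane[1] - ref[1])

# Three hypothetical detected planes, ranked towards a reference plane.
planes = [(0.00, 5.0), (1.57, 0.0), (0.05, 1.0)]
ref = (0.0, 0.0)
ranked = sorted(planes, key=lambda p: plane_similarity_key(p, ref))
print(ranked[0])   # (0.05, 1.0) -- nearly parallel and close to the reference
```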
The presented paper does not focus on any specific application of the detected symmetries. However, the large number of symmetries found, ordered by the proposed symmetry measure (1), is suitable for further analysis. For example, a locally bounded scene (object) with two distinctly strong symmetries with orthogonal symmetry planes is quite likely an artificial object, e.g., a building. One can also look for patterns of similar symmetries at different locations (similar objects have similar sets of symmetries). Such combinations can also be used for machine learning.
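The orthogonal-symmetries heuristic for spotting artificial objects could be sketched as follows; the thresholds and the symmetry representation are illustrative assumptions, not values from the paper:

```python
import math

def likely_artificial(sym_a, sym_b, min_sm=0.5, angle_tol=math.radians(5)):
    """Heuristic: two strong symmetries with (nearly) orthogonal vertical
    planes hint at an artificial object. Each symmetry is (theta, sm):
    plane azimuth in radians and symmetry measure in [0, 1]. The thresholds
    are illustrative assumptions.
    """
    if sym_a[1] < min_sm or sym_b[1] < min_sm:
        return False
    d = abs(sym_a[0] - sym_b[0]) % math.pi
    d = min(d, math.pi - d)                   # unoriented plane normals
    return abs(d - math.pi / 2) <= angle_tol

# A building-like pair of symmetries vs. an oblique pair.
print(likely_artificial((0.00, 0.81), (1.57, 0.76)))  # True
print(likely_artificial((0.00, 0.81), (0.70, 0.76)))  # False
```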

Author Contributions

Conceptualization, D.P.; methodology, D.P.; software, L.L.; validation, D.P. and B.Ž.; formal analysis, D.P. and B.Ž.; investigation, D.P., L.L. and B.Ž.; resources, B.Ž.; data curation, B.Ž.; writing—original draft preparation, D.P.; writing—review and editing, B.Ž.; visualization, D.P. and L.L.; supervision, B.Ž. and D.P.; project administration, B.Ž. and D.P.; funding acquisition, B.Ž. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Slovenian Research and Innovation Agency under Research Project N2-0181 and Research Programme P2-0041.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The source code of the symmetry detection software is available at https://github.com/luckyLukac/ReflectionSymmetryDetection, accessed on 18 July 2023. LAS tiles of the entire Slovenian territory are publicly accessible at http://gis.arso.gov.si/evode/profile.aspx?id=atlas_voda_Lidar@Arso&culture=en-US, accessed on 18 July 2023. For information about the trimmed datasets used in the experiments, software for clipping LAS files, etc., please contact the author L.L. by email.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DEM: Digital elevation model
EO: Earth observation
GIS: Geographic information system
GPS: Global Positioning System
LAS: LAS (LASer) file format
LiDAR: Light detection and ranging
OECD: The Organisation for Economic Co-operation and Development
PCA: Principal component analysis
SM: Symmetry measure

References

  1. Petitjean, M. A definition of symmetry. Symmetry Cult. Sci. 2007, 18, 99–119. [Google Scholar]
  2. Tyler, C.W. Human Symmetry Perception and Its Computational Analysis; Psychology Press: London, UK, 2003. [Google Scholar]
  3. Bertamini, M.; Makin, A.D. Brain activity in response to visual symmetry. Symmetry 2014, 6, 975–996. [Google Scholar] [CrossRef]
  4. Žalik, B.; Strnad, D.; Kohek, Š.; Kolingerová, I.; Nerat, A.; Lukač, N.; Podgorelec, D. A hierarchical universal algorithm for geometric objects’ reflection symmetry detection. Symmetry 2022, 14, 1060. [Google Scholar] [CrossRef]
  5. Machilsen, B.; Pauwels, M.; Wagemans, J. The role of vertical mirror symmetry in visual shape detection. J. Vis. 2009, 9, 11. [Google Scholar] [CrossRef]
  6. Li, Y.F.; Pizlo, Z.; Steinman, R.M. A computational model that recovers the 3D shape of an object from a single 2D retinal representation. Vis. Res. 2009, 49, 979–991. [Google Scholar] [CrossRef]
  7. Kansakar, P.; Hossain, F. A review of applications of satellite earth observation data for global societal benefit and stewardship of planet earth. Space Policy 2016, 36, 46–54. [Google Scholar] [CrossRef]
  8. Sentinel Online. Available online: https://sentinels.copernicus.eu/web/sentinel/home (accessed on 13 August 2023).
  9. Mongus, D.; Lukač, N.; Žalik, B. Ground and building extraction from LiDAR data based on differential morphological profiles and locally fitted surfaces. ISPRS J. Photogramm. Remote Sens. 2014, 93, 145–156. [Google Scholar] [CrossRef]
  10. Mongus, D.; Žalik, B. An efficient approach to 3D single tree-crown delineation in LiDAR data. ISPRS J. Photogramm. Remote Sens. 2015, 108, 219–233. [Google Scholar] [CrossRef]
  11. Cukjati, J.; Mongus, D.; Žalik, K.R.; Žalik, B. IoT and satellite sensor data integration for assessment of environmental variables: A case study on NO2. Sensors 2022, 22, 5660. [Google Scholar] [CrossRef]
  12. Earth Observation for Decision Making. Available online: https://www.oecd.org/environment/indicators-modelling-outlooks/Earth_Observation_for_Decision_Making.pdf (accessed on 10 August 2023).
  13. Kerber, J.; Bokeloh, M.; Wand, M.; Seidel, H.-P. Scalable symmetry detection for urban scenes. Comput. Graph. Forum 2013, 32, 3–15. [Google Scholar] [CrossRef]
  14. Wang, H.; Zhang, W.; Chen, Y.; Chen, M.; Yan, K. Semantic decomposition and reconstruction of compound buildings with symmetric roofs from LiDAR data and aerial imagery. Remote Sens. 2015, 7, 13945–13974. [Google Scholar] [CrossRef]
  15. Tu, J.; Sui, H.; Feng, W.; Sun, K.; Xu, C.; Han, Q. Detecting building facade damage from oblique aerial images using local symmetry feature and the Gini Index. Remote Sens. Lett. 2017, 8, 676–685. [Google Scholar] [CrossRef]
  16. Clode, S.; Rottensteiner, F.; Kootsookos, P.; Zelniker, E. Detection and vectorization of roads from LiDAR data. Photogramm. Eng. Remote Sens. 2007, 73, 517–535. [Google Scholar] [CrossRef]
  17. Gézero, L.; Antunes, C. Automated three-dimensional linear elements extraction from mobile LiDAR point clouds in railway environments. Infrastructures 2019, 4, 46. [Google Scholar] [CrossRef]
  18. Muth, C.C.; Bazzaz, F.A. Tree canopy displacement and neighborhood interactions. Can. J. For. Res. 2003, 33, 1323–1330. [Google Scholar] [CrossRef]
  19. Zhang, C.; Zhou, Y.; Qiu, F. Individual tree segmentation from LiDAR point clouds for urban forest inventory. Remote Sens. 2015, 7, 7892–7913. [Google Scholar] [CrossRef]
  20. Jesenko, D.; Hruda, L.; Kolingerová, I.; Žalik, B.; Podgorelec, D. Symmetry-based Method for Water Level Prediction using Sentinel 2 Data. Sens. Transducers 2022, 256, 12–18. [Google Scholar]
  21. Mitra, N.J.; Guibas, L.J.; Pauly, M. Partial and approximate symmetry detection for 3d geometry. ACM Trans. Graph. TOG 2006, 25, 560–568. [Google Scholar] [CrossRef]
  22. Mitra, N.J.; Pauly, M.; Wand, M.; Ceylan, D. Symmetry in 3D geometry: Extraction and applications. Comput. Graph. Forum 2013, 32, 1–23. [Google Scholar] [CrossRef]
  23. Chen, H.; Wang, L.; Zhang, Y.; Wang, C. Dominant Symmetry Plane Detection for Point-Based 3D Models. Adv. Multimed. 2020, 2020, 8861367. [Google Scholar]
  24. Schiebener, D.; Schmidt, A.; Vahrenkamp, N.; Asfour, T. Heuristic 3D Object Shape Completion Based on Symmetry and Scene Context. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 74–81. [Google Scholar]
  25. Combés, B.; Hennessy, R.; Waddington, J.; Roberts, N.; Prima, S. Automatic Symmetry Plane Estimation of Bilateral Objects in Point Clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  26. Ecins, A.; Fermüller, C.; Aloimonos, Y. Detecting Reflectional Symmetries in 3D Data Through Symmetrical Fitting. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 1779–1783. [Google Scholar]
  27. Nagar, R.; Raman, S. Detecting Approximate Reflection Symmetry in a Point Set Using Optimization on Manifold. IEEE Trans. Signal Process. 2019, 67, 1582–1595. [Google Scholar] [CrossRef]
  28. Hruda, L.; Kolingerová, I.; Váša, L. Robust, fast and flexible symmetry plane detection based on differentiable symmetry measure. Vis. Comput. 2022, 38, 555–571. [Google Scholar] [CrossRef]
  29. Elawady, M.; Ducottet, C.; Alata, O.; Barat, C.; Colantoni, P. Wavelet-Based Reflection Symmetry Detection via Textural and Color Histograms: Algorithm and Results. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 1734–1738. [Google Scholar]
  30. Li, B.; Johan, H.; Ye, Y.; Lu, Y. Efficient 3D Reflection Symmetry Detection: A View-based Approach. Graph. Models 2016, 83, 2–14. [Google Scholar] [CrossRef]
  31. Sipiran, I.; Gregor, R.; Schreck, T. Approximate symmetry detection in partial 3d meshes. Comput. Graph. Forum 2014, 33, 131–140. [Google Scholar] [CrossRef]
  32. Sun, C.; Sherrah, J. 3D Symmetry Detection Using the Extended Gaussian Image. IEEE Trans. Pattern Anal. 1997, 19, 164–168. [Google Scholar]
  33. Kakarala, R.; Kaliamoorthi, P.; Premachandran, V. Three-Dimensional Bilateral Symmetry Plane Estimation in the Phase Domain. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 249–256. [Google Scholar]
  34. Korman, S.; Litman, R.; Avidan, S.; Bronstein, A. Probably Approximately Symmetric: Fast Rigid Symmetry Detection with Global Guarantees. Comput. Graph. Forum 2015, 34, 2–13. [Google Scholar] [CrossRef]
  35. Podolak, J.; Shilane, P.; Golovinsky, A.; Rusinkiewicz, S.; Funkhouser, T. A Planar-Reflective Symmetry Transform for 3D Shapes. ACM Trans. Graph. 2006, 25, 549–559. [Google Scholar] [CrossRef]
  36. Speciale, P.; Oswald, M.R.; Cohen, A.; Pollefeys, M. A Symmetry Prior for Convex Variational 3D Reconstruction. In Computer Vision–ECCV 2016, Lecture Notes in Computer Science 9912; Springer: Cham, Switzerland, 2016; pp. 313–328. [Google Scholar]
  37. Simari, P.D.; Kalogerakis, E.; Singh, K. Folding meshes: Hierarchical Mesh Segmentation Based on Planar Symmetry. In Proceedings of the 4th Eurographics Symposium on Geometry Processing, Cagliari, Italy, 26–28 June 2006; pp. 111–119. [Google Scholar]
  38. Cailliere, D.; Denis, F.; Pele, D.; Baskurt, A. 3d mirror symmetry detection using Hough transform. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, IEEE, San Diego, CA, USA, 12–15 October 2008; pp. 1772–1775. [Google Scholar]
  39. Hruda, L.; Kolingerová, I.; Podgorelec, D. Local Reflectional Symmetry Detection in Point Clouds Using a Simple PCA-Based Shape Descriptor. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications—VISIGRAPP 2023—Vol. 1: GRAPP, Lisbon, Portugal, 19–21 February 2023; pp. 52–63. [Google Scholar]
  40. Ji, P.; Liu, X. A Fast and Efficient 3D Reflection Symmetry Detector Based on Neural Networks. Multimed. Tools Appl. 2019, 78, 35471–35492. [Google Scholar] [CrossRef]
  41. Wu, Z.; Jiang, H.; He, S. Symmetry Detection of Occluded Point Cloud Using Deep Learning. Procedia Comput. Sci. 2021, 183, 32–39. [Google Scholar] [CrossRef]
  42. Gao, L.; Zhang, L.-X.; Meng, H.-Y.; Ren, Y.-H.; Lai, Y.-K.; Kobbelt, L. PRS-Net: Planar Reflective Symmetry Detection Net for 3D Models. IEEE Trans. Vis. Comput. Graph. 2021, 27, 3007–3018. [Google Scholar] [CrossRef]
  43. Tsogkas, S.; Kokkinos, I. Learning-based symmetry detection in natural images. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Springer: Berlin/Heidelberg, Germany; pp. 41–54. [Google Scholar]
  44. Podgorelec, D.; Lukač, L.; Žalik, B. Local reflection symmetry detection in Earth observation data. In Proceedings of the Middle-European Conference on Applied Theoretical Computer Science: Information Society—IS 2022: Proceedings of the 25th International Multiconference, Koper, Slovenia, 13–14 October 2022; Volume I, pp. 37–40.
  45. LiDAR GIS Viewer. Available online: http://gis.arso.gov.si/evode/profile.aspx?id=atlas_voda_Lidar@Arso&culture=en-US (accessed on 18 July 2023).
Figure 1. Global, partial, and local reflection symmetry demonstrated on a rasterized letter C: (a) input object and its two obvious axes of symmetry (green); (b) global symmetry with the horizontal axis splitting the object into two symmetric (blue and red) halves; (c) the strongest partial symmetry (next to the global one) and the remaining (gray) object’s pixels; (d,e) two strongest local symmetries obtained from the considered partial one.
Figure 2. Classification of material voxels according to their surroundings: (a–i) a voxel on a flat surface of material voxels (colored gray) is uninteresting (colored red) regardless of the status of the other (white) voxels; (j) an example of an interesting (green) voxel on a convex edge surrounded by empty (light blue) voxels.
Figure 3. The effect of voxelization on the lengths of line segments and the angles between them: (a) four parallel line segments among pairs of input points; (b) too large voxels can even cause parallel (blue and red) line segments to become perpendicular or a line segment (yellow) to degenerate into a point; (c) for smaller voxels, the deviations and, thus, the tolerances required are significantly smaller.
Figure 4. Basic symmetry detection between two line segments: (a) exact symmetry between the line segments p1p2 and p3p4; (b) approximate symmetry in a low-resolution voxel space where p1, p2, p3, and p4 are replaced by the central points v1, v2, v3, and v4 of the corresponding voxels; (c) approximate symmetry in a voxel space with higher resolution.
Figure 5. Eight horizontal slices of the grid of 500 voxels, representing the strongest partial symmetry, detected on the Maribor Cathedral. The slices appear in order from: (a–h) the bottommost to the topmost.
Figure 6. Grad Grad (the Grad castle) test case: (a) digital orthophoto; (b) strongest partial symmetry on 1 m voxels; (c) strongest local symmetry on 1 m voxels; (d) 940th strongest partial symmetry on 3 m voxels; (e) strongest partial symmetry on 3 m voxels; (f) strongest local symmetry on 3 m voxels.
Figure 7. Maribor “Ljudski vrt” stadium, Bled Island, and Punta Piran (Piran cape) test cases. Stadium: (a) digital orthophoto; (b) strongest partial symmetry; (c) strongest local symmetry. Island: (d) digital orthophoto; (e) strongest partial symmetry; (f) strongest local symmetry. Cape: (g) digital orthophoto; (h) strongest partial symmetry; (i) 14,055th strongest local symmetry.
Figure 8. Slomšek square Maribor test case: (a) digital orthophoto; (b) strongest partial symmetry on 1 m voxels; (c) strongest local symmetry on 1 m voxels; (d) 2495th strongest partial symmetry on 3 m voxels; (e) strongest partial symmetry on 3 m voxels; (f) strongest local symmetry on 3 m voxels.
Figure 9. The Maribor Cathedral test case: (a) strongest partial symmetry; (b) strongest partial symmetry—side view; (c) 23rd strongest partial symmetry; (d) 23rd strongest partial symmetry—side view.
Table 1. Selected testing landmarks in Slovenia.
Landmark | Part of Slovenia | GPS (N, E) | Area [m × m] | Figure
Grad castle | North-East | (46.800, 16.096) | 140 × 140 | Figure 6
Ljudski vrt | North-East | (46.563, 15.640) | 280 × 240 | Figure 7a–c
Bled Island | North-West | (46.362, 14.090) | 150 × 200 | Figure 7d–f
Punta Piran | South-West | (45.529, 13.567) | 400 × 300 | Figure 7g–i
Slomšek square | North-East | (46.559, 15.645) | 240 × 160 | Figure 8
Maribor Cathedral | North-East | (46.559, 15.645) | 80 × 50 | Figure 5 and Figure 9
Table 2. Test parameters.
Parameter | Value
Min. number of interesting voxels in connected region (pre-processing filter) | 5
Min. number of voxels in connected symmetrical region (post-processing filter) | 5
Distance tolerance [m] | 0.1
Angle tolerance [deg.] | 1
Min. length of line segment [voxel] | 4
Min. connected region in local symmetry [voxel] | 5
Table 3. Settings and results for six test cases in Slovenia.
Row | Measure | Grad Castle (1 m) | Grad Castle (3 m) | Ljudski Vrt | Slomšek Square | Maribor Cathedral | Bled Island | Punta Piran
1 | Layers 1 | B+T | B+T | B | B+T | B | B+T+G | B
2 | Points | 273,540 | 273,540 | 126,714 | 88,107 | 11,779 | 219,186 | 99,413
3 | Voxel size [m] | 1 | 3 | 3 | 3 | 3 | 3 | 3
4 | Voxels | 1,629,909 | 62,169 | 98,496 | 42,480 | 5415 | 9520 | 25,326
5 | Material voxels | 89,851 | 12,190 | 4507 | 4864 | 704 | 1738 | 3114
6 | Interesting voxels | 52,142 | 3721 | 2579 | 2478 | 380 | 538 | 1699
7 | Partial symmetries | 32,263 | 4438 | 6863 | 5082 | 64 | 979 | 12,177
8 | Local symmetries | 476,393 | 8616 | 24,404 | 15,629 | 73 | 1030 | 18,556
9 | Time [s] for partial symmetries | 15,741 | 182 | 23 | 35 | 0.03 | 2.3 | 844
10 | Time [s] for local symmetries | 287 | 9 | 2 | 4 | 0.01 | 0.3 | 6
11 | Total time [s] | 16,028 | 191 | 25 | 39 | 0.04 | 2.6 | 850
12 | Points in strongest partial symmetry | 75,182 | 166,968 | 68,393 | 45,359 | 7626 | 189,632 | 63,851
13 | Voxels in strongest partial symmetry | 23,597 | 7372 | 2050 | 2675 | 361 | 1412 | 1798
14 | % of voxels in strongest partial symmetry | 26.26 | 60.47 | 45.48 | 54.99 | 51.27 | 81.24 | 57.73
15 | Points in weakest partial symmetry | 24 | 2703 | 796 | 1273 | 1194 | 2810 | 440
16 | Voxels in weakest partial symmetry | 7 | 112 | 19 | 31 | 66 | 38 | 16
17 | % of voxels in weakest partial symmetry | 0.01 | 0.91 | 0.42 | 0.63 | 9.37 | 2.18 | 0.51
18 | Points in strongest local symmetry | 23,353 | 154,958 | 49,191 | 42,393 | 7415 | 189,403 | 60,229
19 | Voxels in strongest local symmetry | 7386 | 6898 | 1346 | 2488 | 349 | 1405 | 1691
20 | % of voxels in strongest local symmetry | 8.22 | 56.58 | 29.86 | 51.15 | 49.57 | 80.84 | 54.3
21 | Points in weakest local symmetry | 6 | 146 | 91 | 402 | 114 | 81 | 15
22 | Voxels in weakest local symmetry | 5 | 5 | 5 | 5 | 7 | 5 | 5
23 | % of voxels in weakest local symmetry | 0.01 | 0.04 | 0.11 | 0.1 | 0.99 | 0.28 | 0.16
1 B—buildings, G—ground, T—trees.

Share and Cite

MDPI and ACS Style

Podgorelec, D.; Lukač, L.; Žalik, B. Reflection Symmetry Detection in Earth Observation Data. Sensors 2023, 23, 7426. https://doi.org/10.3390/s23177426


