Immersive Visual Information Mining for Exploring the Content of EO Archives
Babaee, Mohammadreza1; Bahmanyar, Gholamreza2
1Institute for Human-Machine Communication, TUM, GERMANY; 2Munich Aerospace Faculty, DLR, GERMANY

The volume of collected Earth Observation (EO) data is increasing immensely, at a rate of several terabytes per day. With current EO technologies these figures will soon be amplified further, with horizons beyond the zettabyte scale. At the same time, the need for timely delivery of focused information for decision making is growing.

Among the solutions recently proposed to access the content of EO data is the Image Information Mining (IIM) approach [1]. IIM is based on a hierarchical information representation: image feature extraction, data reduction and catalogue generation, and human-centered active learning for the semantic interpretation of the image content. Further developments focused on algorithms for auto-annotation and the discovery of new categories, Latent Semantic Annotation, and cascade and multi-grid methods for learning with reduced example sets in huge data repositories [2,3]. However, one of the remaining challenges is how to communicate the essential information of such massive data volumes to a human.

The focus of this article is the presentation of an alternative solution: the Immersive Visual Information Mining approach. The EO data user is immersed in a Cave Automatic Virtual Environment (CAVE) and interacts with views of the parameter space and with the images, in order to explore very large EO data volumes interactively and time-efficiently, to identify particular groupings and outliers, and to perform a first quantitative analysis.

The proposed method is likewise based on a hierarchical information representation. In a first step the EO images, in our study multispectral and SAR VHR images, are partitioned into patches whose size is adapted to capture meaningful contextual information. Experimentally, the optimal size was found to be ca. 200 pixels, independently of the image resolution. A crucial operation is to index the image content by extracting a summary and representing it in a descriptor. In general, the descriptors capture different properties, e.g. radiometric, phase, or geometric, and thus do not describe the same information content. A library of descriptors specific to multispectral and SAR images is used; it comprises spectral-SIFT [7], spectral-WLD [8], color histograms, and color-SIFT. Thus, the whole archive is represented in the n-dimensional space of the extracted features, each patch being a point.
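
As a minimal illustration of this step, the following Python sketch (using NumPy; the array layout, the non-overlapping 200-pixel patches, and the plain per-channel histogram standing in for the full descriptor library are assumptions made here for brevity) partitions an image into patches and stacks one descriptor per patch into a feature matrix:

import numpy as np

def patch_descriptors(image, patch_size=200, bins=16):
    """Partition an H x W x C image array into non-overlapping patches and
    summarize each patch with per-channel intensity histograms; the plain
    histogram stands in for the descriptor library (spectral-SIFT,
    spectral-WLD, color-SIFT) named in the text."""
    h, w, c = image.shape
    rows = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]
            hists = [np.histogram(patch[..., k], bins=bins, range=(0, 255))[0]
                     for k in range(c)]
            rows.append(np.concatenate(hists) / float(patch_size * patch_size))
    return np.vstack(rows)   # one point in feature space per patch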

The core part of the processing is the dimensionality reduction, performed by algorithms such as Non-negative Matrix Factorization [4], Laplacian Eigenmaps [5], and Stochastic Neighbor Embedding [6].

Non-negative Matrix Factorization (NMF) is a low-rank matrix approximation technique that factorizes a matrix with non-negative elements into two non-negative matrices. This factorization is not unique, and a variety of algorithms based on different constraints have been proposed. NMF has been widely used in data mining applications.
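
As a sketch of how this reduction step could look, the following uses scikit-learn's NMF on a non-negative patch-descriptor matrix; the rank of three matches the three display coordinates needed in the CAVE, while the initialization and iteration settings are illustrative assumptions rather than the authors' configuration:

import numpy as np
from sklearn.decomposition import NMF

# X: non-negative patch-descriptor matrix (n_patches x n_features), e.g. the
# output of patch_descriptors() above; random placeholder values used here.
X = np.random.rand(1000, 48)

# Factorize X ~ W H with W, H >= 0; each row of W is a 3-D point, one per patch.
nmf = NMF(n_components=3, init='nndsvd', max_iter=500, random_state=0)
W = nmf.fit_transform(X)   # n_patches x 3 low-dimensional coordinates
H = nmf.components_        # 3 x n_features non-negative basis vectors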

Laplacian Eigenmaps is a nonlinear data reduction algorithm based on spectral techniques. The main idea is that the data lie on a low-dimensional manifold embedded in the high-dimensional space. This manifold is represented by a graph whose nodes are the data points and whose edges connect each point to its k nearest neighbors; the low-dimensional coordinates are obtained from the eigenvectors of the graph Laplacian.
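
A corresponding sketch with scikit-learn's SpectralEmbedding, which builds the k-nearest-neighbor graph and embeds the leading eigenvectors of its graph Laplacian; the neighborhood size of ten is an illustrative assumption:

import numpy as np
from sklearn.manifold import SpectralEmbedding

X = np.random.rand(1000, 48)   # placeholder for the patch-descriptor matrix

# Connect each patch to its k nearest neighbors and use the leading
# eigenvectors of the graph Laplacian as 3-D coordinates (Belkin & Niyogi [5]).
lap = SpectralEmbedding(n_components=3, affinity='nearest_neighbors',
                        n_neighbors=10, random_state=0)
Y_lap = lap.fit_transform(X)   # n_patches x 3 embedding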

Stochastic Neighbor Embedding is a probabilistic approach that aims to preserve neighborhood identities in the low-dimensional representation of the data. A Gaussian distribution centered on each data point i is used to compute the probability of picking data point j as its neighbor. The same neighborhood construction is carried out in the low-dimensional space, and the goal is to match the two sets of distributions as closely as possible.
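
For illustration, the following sketch uses scikit-learn's t-SNE, a later, heavier-tailed variant of the SNE of Hinton and Roweis [6], purely as a readily available stand-in; the perplexity (effective neighborhood size) chosen below is an assumption:

import numpy as np
from sklearn.manifold import TSNE

X = np.random.rand(1000, 48)   # placeholder for the patch-descriptor matrix

# Model pairwise neighbor probabilities of the high-dimensional patches and
# find a 3-D layout whose neighbor probabilities match them (minimizing the
# Kullback-Leibler divergence between the two sets of distributions).
tsne = TSNE(n_components=3, perplexity=30.0, init='pca', random_state=0)
Y_sne = tsne.fit_transform(X)  # n_patches x 3 coordinates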

The exploration of the image archive is performed by representing the three-dimensional projected space in the immersive environment, the CAVE. The CAVE comprises four room-sized walls used as displays, a cluster of PCs for rendering, a master PC controlling and synchronizing the other PCs, a data reduction console (i.e., a PC running the data reduction algorithms), and finally a peripheral device server to capture the signals from the controllers. In this system, seven powerful computers work together to provide the immersive data visualization. All machines are connected in a TCP/IP network, and the master PC supervises the other machines. Each low-dimensional feature vector is interpreted as the position of an image patch in the CAVE. The user can move around this virtual environment to see the data from different viewpoints and can also cluster the data manually. A demonstration of the visualization is given in [9].
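
The mapping from embedding coordinates to display positions could, for example, be as simple as the following sketch; the room dimensions and coordinate convention are illustrative assumptions, not the parameters of the actual installation:

import numpy as np

def to_cave_positions(Y, room=(4.0, 4.0, 3.0)):
    """Rescale 3-D embedding coordinates (n_patches x 3) into an axis-aligned
    display volume of the given size; the room dimensions in metres are an
    illustrative assumption."""
    Y = np.asarray(Y, dtype=float)
    lo, hi = Y.min(axis=0), Y.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)      # avoid division by zero
    return (Y - lo) / span * np.asarray(room)   # coordinates in [0, room]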

For the validation and demonstration of the developed methods, a test database of TerraSAR-X and multispectral data is used. The validation of the results is performed by comparison with clustering and topic discovery based on Latent Semantic Analysis.
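
One possible LSA-style baseline for such a comparison (an assumed setup, not necessarily the authors' exact pipeline) treats the descriptor bins as terms and the patches as documents, applies a truncated SVD to the patch-term matrix, and clusters in the resulting topic space:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD

X = np.random.rand(1000, 48)   # placeholder for the patch-descriptor matrix

# LSA: truncated SVD of the patch-term matrix, then clustering in topic space.
# The numbers of topics and clusters below are illustrative assumptions.
topics = TruncatedSVD(n_components=10, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(topics)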

References

[1]. D. A. Keim, et al., 2004, Visual Data Mining in Large Geospatial Point Sets, IEEE Computer Graphics and Applications, Sept./Oct. 2004, pp. 36–44.
[2]. J. Ontrup, 2009, Detecting, Assessing, and Monitoring Relevant Topics in Virtual Information Environments, IEEE Transactions on Knowledge and Data Engineering, Vol. 21, No. 3, pp. 415–427.
[3]. M. Datcu, et al., 2003, Information Mining in Remote Sensing Image Archives: System Concepts, IEEE Transactions on Geoscience and Remote Sensing, 41(12), pp. 2923–2936.
[4]. B. Schuller, F. Weninger, M. Wollmer, Y. Sun, and G. Rigoll, 2010, Non-Negative Matrix Factorization as Noise-Robust Feature Extractor for Speech Recognition, Proc. Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), Dallas, pp. 4562–4565.
[5]. M. Belkin and P. Niyogi, 2001, Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering, Advances in Neural Information Processing Systems 14, MIT Press, pp. 585–591.
[6]. G. Hinton and S. Roweis, 2002, Stochastic Neighbor Embedding, Advances in Neural Information Processing Systems 15, pp. 833–840.
[7]. D. G. Lowe, 1999, Object Recognition from Local Scale-Invariant Features, Proc. Seventh IEEE International Conference on Computer Vision (ICCV), Vol. 2, pp. 1150–1157.
[8]. J. Chen, S. Shan, C. He, G. Zhao, M. Pietikainen, X. Chen, and W. Gao, 2010, WLD: A Robust Local Image Descriptor, IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9), pp. 1705–1720.
[9]. http://www.mmk.ei.tum.de/layout.php?selectedMain=Personen&selectedSub=Homes&Special=rez