The field of data analysis (e.g., data mining, machine learning, algorithm engineering) has attracted more and more attention in recent years. One reason for this is that data volumes have increased dramatically during the last decade, leading to so-called big data problems. This is the case, for instance, in astronomy, where current and upcoming projects like the Sloan Digital Sky Survey (SDSS) or the Large Synoptic Survey Telescope (LSST) gather, or will gather, data in the tera- and petabyte range. For such projects, the sheer data volume renders a manual analysis impossible and necessitates the use of automatic data analysis tools.
The corresponding data-rich scenarios often involve a large number of patterns (e.g., the number of galaxy images) and/or a large number of dimensions (e.g., pixels per image). Further, a general lack of "labeled data" can often be observed, since manual labeling by domain experts can be very time-consuming. Dealing with these situations usually requires the adaptation of standard data analysis techniques, and this is part of my research. In particular, I am interested in the following research fields/projects:
Semi- and Unsupervised Support Vector Machines
The task of classifying patterns is among the most prominent ones in the field of machine learning. Support vector machines are state-of-the-art tools for this task and have been extended to various learning settings, including other supervised learning tasks (e.g., regression or preference learning) as well as so-called semi- and unsupervised scenarios.
Among these extensions are, for instance, semi-supervised support vector machines, which take additional unlabeled patterns into account. These unlabeled patterns reveal more about the "structure" of the data and can lead to models with a better performance.
In some cases, no labeled patterns at all are given. This leads to the so-called maximum margin clustering problem. While very appealing from a practical point of view, both variants induce difficult combinatorial optimization problems, which makes a direct application of these extensions challenging.
Developing efficient optimization schemes for these variants is part of my research; see below for corresponding publications or here for an implementation. Both support vector machines and their extensions can successfully be applied to, e.g., text data stemming from various application domains like e-commerce or social media.
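As a rough illustration of the underlying idea (a minimal sketch, not the optimization schemes developed in the publications below), a semi-supervised support vector machine can be phrased as minimizing a regularized loss over the labeled patterns plus a penalty that pushes unlabeled patterns away from the decision boundary. All data, parameter values, and function names here are synthetic/illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def s3vm_objective(w, X_l, y, X_u, C=1.0, C_u=1.0, s=3.0):
    """Smooth surrogate of a semi-supervised SVM objective: squared hinge
    loss on labeled patterns plus an exponential penalty that keeps
    unlabeled patterns away from the margin."""
    margins = y * (X_l @ w)
    loss_l = np.maximum(0.0, 1.0 - margins) ** 2   # labeled part
    loss_u = np.exp(-s * (X_u @ w) ** 2)           # unlabeled part
    return 0.5 * w @ w + C * loss_l.sum() + C_u * loss_u.sum()

rng = np.random.default_rng(0)
# two Gaussian blobs; only a handful of points carry labels
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_all = np.hstack([-np.ones(50), np.ones(50)])
labeled = np.array([0, 1, 50, 51])                 # 4 labeled patterns
unlabeled = np.setdiff1d(np.arange(100), labeled)

res = minimize(s3vm_objective, x0=np.zeros(2),
               args=(X[labeled], y_all[labeled], X[unlabeled]))
pred = np.sign(X @ res.x)
print("accuracy:", (pred == y_all).mean())
```

Because the surrogate is smooth, a standard quasi-Newton method suffices here; the combinatorial hardness mentioned above shows up in the non-convexity of the unlabeled term rather than in the optimizer itself.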
Short-Term Wind and Solar Energy Prediction
In recent years, there has been a significant increase in energy produced by sustainable resources like wind and solar power plants. This has led to a shift from traditional energy systems to so-called smart grids (i.e., distributed systems of energy suppliers and consumers). While sustainable energy resources are very appealing from an environmental point of view, their volatility renders their integration into the overall energy system difficult.
For this reason, short-term wind and solar energy prediction systems are essential for balancing authorities to schedule spinning reserves and reserve energy. This task can be formalized as a regression problem (with patterns based on, e.g., wind turbine measurements), and the resulting models are well suited for short-term forecasting scenarios; see below for details.
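To make the regression formulation concrete, the following sketch fits a support vector regression model to a synthetic wind power curve. The data is entirely artificial (a cubic power curve with noise standing in for turbine measurements), and the hyperparameter values are illustrative rather than tuned:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(42)
# toy stand-in for turbine measurements: wind speed in m/s as pattern,
# normalized power output (synthetic cubic power curve plus noise) as target
wind = rng.uniform(3.0, 12.0, 300)
power = np.clip((wind - 3.0) ** 3 / 9.0 ** 3, 0.0, 1.0)
y = power + rng.normal(0.0, 0.02, 300)

X = wind.reshape(-1, 1)
# train on the first 250 measurements, forecast the remaining 50
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:250], y[:250])
pred = model.predict(X[250:])
print("mean absolute error:", np.abs(pred - y[250:]).mean())
```

In a real forecasting setting, the patterns would of course be time-lagged measurements from one or several turbines rather than a single instantaneous wind speed.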
Big Data in Astronomy
Modern telescopes and satellites can gather huge amounts of data. Current catalogs, for instance, contain data in the terabyte range; upcoming projects will encompass petabytes of data. On the one hand, this data-rich situation offers the opportunity to make new discoveries like detecting new, distant objects. On the other hand, managing such data volumes can be very difficult and usually leads to problem-specific challenges.
I am involved in the development of redshift estimation models (e.g., regression models) for so-called quasi-stellar radio sources (quasars), which are among the most distant objects that can be observed from Earth. To efficiently process the large data volumes, we make use of spatial data structures (like k-d trees), which can be applied to various other tasks as well. See the publications below for more details.
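The following sketch shows the basic pattern behind k-d-tree-based local regression for redshift estimation, using SciPy's `cKDTree` (not the buffer k-d tree developed in the ICML paper below). The catalog is synthetic, and the choice of five "color" features and ten neighbors is purely illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
# hypothetical catalog: 5 photometric color features per object and a
# known (here synthetic) spectroscopic redshift for each training object
colors_train = rng.normal(0.0, 1.0, (10000, 5))
z_train = np.abs(colors_train.sum(axis=1)) * 0.1

tree = cKDTree(colors_train)            # build once, query many times
colors_query = rng.normal(0.0, 1.0, (100, 5))
dist, idx = tree.query(colors_query, k=10)   # 10 nearest neighbors each
z_est = z_train[idx].mean(axis=1)       # simple local regression estimate
print(z_est[:3])
```

The appeal of such spatial data structures is that each query takes time roughly logarithmic in the catalog size in low dimensions, which is what makes nearest-neighbor-based models feasible for terabyte-scale catalogs.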
Given the huge data volumes encountered in, e.g., large-scale text mining scenarios (or the ones in astronomy), the running time needed to build appropriate data mining models (training phase) and to apply the final models (testing phase) can constitute one of the main bottlenecks of the data analysis process. Further, when the data volumes exceed the capacity of the main memory of standard computer systems, the transfer of data can significantly slow down both phases. A desirable goal is, in general, the reduction of the practical runtime needed for the various stages of the overall data analysis process; see below for more information.
A complete list of my publications can be found here.
Fabian Gieseke, Tapio Pahikkala, and Christian Igel. Polynomial Runtime Bounds for Fixed-Rank Unsupervised Least-Squares Classification. In Proceedings of the 5th Asian Conference on Machine Learning (ACML). 2013, 62-71.
Fabian Gieseke, Antti Airola, Tapio Pahikkala, and Oliver Kramer. Fast and Simple Gradient-Based Optimization for Semi-Supervised Support Vector Machines. Neurocomputing (ICPRAM 2012 Special Issue) 123(10):23-32, 2014.
Tapio Pahikkala, Antti Airola, Fabian Gieseke, and Oliver Kramer. Unsupervised Multi-Class Regularized Least-Squares Classification. In Proceedings of the 12th IEEE International Conference on Data Mining (ICDM). 2012, 585-594.
Kai Polsterer, Fabian Gieseke, Christian Igel, and Tomotsugu Goto. Improving the Performance of Photometric Regression Models via Massive Parallel Feature Selection. In Proceedings of the 23rd Annual Astronomical Data Analysis Software & Systems conference (ADASS). 2013.
Fabian Gieseke, Kai Polsterer, and Peter Zinn. Photometric Redshift Estimation of Quasars: Local versus Global Regression. In Proceedings of the Astronomical Data Analysis Software & Systems (ADASS). 2011.
Fabian Gieseke, Kai Lars Polsterer, Andreas Thom, Peter Zinn, Dominik Bomans, Ralf-Jürgen Dettmar, Oliver Kramer, and Jan Vahrenhold. Detecting Quasars in Large-Scale Astronomical Surveys. In Proceedings of the 9th International Conference on Machine Learning and Applications (ICMLA). 2010, 352-357.
Oliver Kramer, Nils Treiber, and Fabian Gieseke. Machine Learning in Wind Energy Information Systems. In EnviroInfo. 2013, 16-24.
Oliver Kramer and Fabian Gieseke. Short-Term Wind Energy Forecasting Using Support Vector Regression. In Proceedings of the International Conference on Soft Computing Models in Industrial and Environmental Applications. 2011, 271-280.
Fabian Gieseke, Justin Heinermann, Cosmin Oancea, and Christian Igel. Buffer k-d Trees: Processing Massive Nearest Neighbor Queries on GPUs. In Proceedings of the 31st International Conference on Machine Learning (ICML). 2014. Accepted.
Fabian Gieseke, Joachim Gudmundsson, and Jan Vahrenhold. Pruning Spanners and Constructing Well-Separated Pair Decompositions in the Presence of Memory Hierarchies. Journal of Discrete Algorithms (JDA) 8(3):259-272, 2010.