Variational Methods in Image and Data Processing

Potts and Mumford-Shah functionals appear in various variants in image processing, inverse problems, sparse approximation, compressed sensing, as well as in statistics. In statistics, where they arguably appeared earliest, they are typically applied to nonparametric regression: a function is fitted to data so that the noise is removed while the relevant information is retained. This is done by minimizing functionals consisting of a data term and a regularizing term. In the case of (discrete) Potts functionals, the regularizing term penalizes the number of jumps of the target variable. In the continuous multivariate setting, the analogue is the length of the boundary curve of the jump set (measured with respect to the Hausdorff measure when the boundary is rough). The Mumford-Shah regularizer additionally allows for smooth transitions on the complement of the edge set. These regularizing terms are nonsmooth and nonconvex, which makes the corresponding functionals challenging both algorithmically and analytically.
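In its standard discrete univariate form (a textbook formulation, not quoted from the works discussed here), the Potts functional reads

```latex
\min_{u} \;\; \gamma \,\#\{i : u_i \neq u_{i+1}\} \;+\; \sum_{i=1}^{n} |u_i - f_i|^p ,
```

while the (continuous) Mumford-Shah functional allows smooth variation away from the edge set $K$:

```latex
\min_{u,K} \;\; \gamma \,\mathrm{length}(K) \;+\; \mu \int_{\Omega \setminus K} |\nabla u|^2 \, dx \;+\; \int_{\Omega} |u - f|^2 \, dx .
```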


Denoising a cartoon image using our algorithm for the bivariate Potts problem.

Potts functionals for directly measured data. We start with vector-space-valued, directly measured data. Here the central task is denoising, and the data term consists of a p-norm-induced distance between the data and the target variable. For Gaussian noise, the L2 norm is suitable, whereas for noise with heavier tails an L1 data term often turns out to be the better choice.

For multivariate Potts functionals we developed a structurally new algorithm capable of dealing with high-dimensional data. It is based on a suitable ADMM splitting that yields computationally tractable subproblems, which in turn are univariate Potts problems. These univariate problems can be solved quickly and exactly using dynamic programming. The algorithm is even applicable to multispectral data, which are too high-dimensional to be processed by state-of-the-art methods.
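The univariate dynamic program can be sketched as follows; this is a generic O(n^2) implementation of the classical Potts recursion with an L2 data term, not the authors' optimized code:

```python
import numpy as np

def potts_univariate_l2(f, gamma):
    """Exact minimizer of the univariate L2 Potts functional
    gamma * (#jumps) + sum_i (u_i - f_i)^2, via dynamic programming (O(n^2))."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    s = np.concatenate(([0.0], np.cumsum(f)))        # prefix sums
    q = np.concatenate(([0.0], np.cumsum(f * f)))    # prefix sums of squares

    def seg_cost(l, r):
        # squared error of the best constant approximation of f[l:r]
        m = r - l
        return q[r] - q[l] - (s[r] - s[l]) ** 2 / m

    B = np.empty(n + 1)            # B[r] = optimal value for the first r samples
    B[0] = -gamma                  # offset: k segments incur only k-1 jump penalties
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        best, arg = np.inf, 0
        for l in range(r):         # l = start of the last segment
            c = B[l] + gamma + seg_cost(l, r)
            if c < best:
                best, arg = c, l
        B[r], jump[r] = best, arg

    # backtrack: fill each segment with its mean
    u = np.empty(n)
    r = n
    while r > 0:
        l = jump[r]
        u[l:r] = (s[r] - s[l]) / (r - l)
        r = l
    return u
```

For example, `potts_univariate_l2([1, 1, 1, 5, 5, 5], 1.0)` recovers the two plateaus exactly, while a large jump penalty collapses the signal to its mean.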

We furthermore developed a fast algorithm for the minimization of univariate Potts functionals with an L1 data term. This algorithm has the same complexity as the fastest known algorithm for the corresponding problem with an L2 data term. Analytically, we obtained convergence of the discrete L1 Potts functionals towards a continuous model as the discretization becomes finer, which implies a corresponding statement for the respective minimizers. Furthermore, we revealed an interesting deconvolution property, which we were also able to underpin theoretically.


Reconstruction from 7 projections: FBP-type approach vs. our method.

Potts functionals for indirectly measured data. The data are often not only noisy but also indirectly measured. The measurement device is typically modelled by a potentially nonlinear operator F. In such cases, we consider data terms based on an Lp distance between F(u), where u is the target variable, and the data f. These problems are NP-hard even in the univariate setting.

For univariate data, we developed an ADMM-based algorithm processing indirectly measured data. The algorithm is generic in the sense that, in order to apply it to a concrete modality, one only has to plug in a solver for the corresponding classical Tikhonov problem, which is already available for many measurement devices. This makes the developed algorithm widely applicable. We also applied the algorithm to sparsity problems (compressed sensing). In the multivariate case, the naive generalization of the univariate approach does not work; instead, one has to find a suitable way of splitting. Having done so, one obtains the same general applicability to a huge class of problems, including ill-posed medical imaging problems such as computed tomography, photoacoustic tomography, and positron emission tomography. In particular, we were able to reconstruct the Shepp-Logan head phantom from only seven (equidistant) projections.
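A generic ADMM splitting of this kind (written in my own notation, not quoted from the cited work) alternates a Potts denoising step with a classical Tikhonov step:

```latex
\begin{align*}
u^{k+1} &= \operatorname*{argmin}_{u} \; \gamma \,\|\nabla u\|_0
          + \tfrac{\mu_k}{2}\,\bigl\|u - \bigl(v^k - \lambda^k/\mu_k\bigr)\bigr\|_2^2
          && \text{(a Potts denoising problem)},\\
v^{k+1} &= \operatorname*{argmin}_{v} \; \|F(v) - f\|_p^p
          + \tfrac{\mu_k}{2}\,\bigl\|v - \bigl(u^{k+1} + \lambda^k/\mu_k\bigr)\bigr\|_2^2
          && \text{(a classical Tikhonov problem)},\\
\lambda^{k+1} &= \lambda^k + \mu_k\,\bigl(u^{k+1} - v^{k+1}\bigr).
\end{align*}
```

The second subproblem is exactly where an off-the-shelf Tikhonov solver for the measurement operator F can be plugged in.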

Potts, Mumford-Shah and TV functionals for manifold-valued data. Data are often manifold-valued; frequently, this is due to constraints imposed by the model. An example is the set of matrices A with the smooth constraint det A = 1, which amounts to considering SL(n)-valued data. For manifold-valued data, Potts and Mumford-Shah models are defined by replacing the Euclidean distance with the distance induced by the Riemannian metric.
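For a Riemannian manifold M with geodesic distance d, this replacement reads, e.g. in the discrete univariate Potts case (a standard formulation):

```latex
\min_{u \in M^n} \;\; \gamma \,\#\{i : u_i \neq u_{i+1}\} \;+\; \sum_{i=1}^{n} d(u_i, f_i)^p .
```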


Mumford-Shah-based segmentation of the corpus callosum on real data.

We have developed algorithms for Potts and Mumford-Shah functionals for manifold-valued data and applied them to diffusion tensor imaging (DTI) as well as Q-ball imaging (QBI). For Cartan-Hadamard manifolds, we have shown the existence of minimizers. In these spaces, we have furthermore proven that the proposed algorithms converge towards a global minimizer in the univariate case.

In this context, we encountered the problem of Lp-TV minimization for manifold-valued data. Interestingly, there were no algorithms able to deal with higher-dimensional manifolds. We developed such algorithms based on cyclic and parallel proximal point algorithms and applied them to manifolds that appear in practical applications (SAR data, nonlinear color spaces, DTI), such as spheres and rotation groups. Again for Cartan-Hadamard manifolds, we could show that our algorithm converges (also in the multivariate case) towards a global minimizer of the (geodesically) convex Lp-TV functional for any starting point.
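The cyclic proximal point idea can be illustrated with a scalar toy version (my own minimal sketch, assuming real-valued data and anisotropic L1-TV; on a manifold, the same proximal moves are carried out along geodesics via exp/log maps):

```python
import numpy as np

def cppa_tv(data, alpha, sweeps=500, lam0=1.0):
    """Cyclic proximal point algorithm for min_u 0.5*||u-f||^2 + alpha*TV(u)
    on scalar data. The objective is split into its summands and the proxes
    are applied cyclically with diminishing step sizes lam_k = lam0/k."""
    f = np.asarray(data, float)
    u = f.copy()
    n = len(u)
    for k in range(1, sweeps + 1):
        lam = lam0 / k                        # sum lam_k = inf, sum lam_k^2 < inf
        # prox of the data term 0.5*||u - f||^2 (closed form)
        u = (u + lam * f) / (1 + lam)
        # prox of each difference term alpha*|u_{i+1} - u_i|:
        # move the pair toward each other by min(alpha*lam, |d|/2)
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            t = min(alpha * lam, abs(d) / 2)
            u[i] += np.sign(d) * t
            u[i + 1] -= np.sign(d) * t
    return u
```

On a step signal with a small regularization weight, the iterates stay close to the data while the mean of the signal is preserved throughout.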

In order to mitigate staircasing effects, one frequently considers TV-type functionals of higher order, e.g., incorporating second-order differences. For manifold-valued data, the question arises how to define second-order differences. We started with the sphere S1. We found a lifting-based definition which allowed us to derive a corresponding cyclic proximal point algorithm in this case as well. We showed convergence for data which are locally sufficiently close. Applications were SAR imaging and habituation data.

Subdivision schemes and multiscale transforms in nonlinear geometries

Subdivision schemes are used in computer graphics, geometric modeling, and geometry processing, as well as in scientific computing and in harmonic analysis. In particular, in wavelet analysis, the limit functions of subdivision serve as scaling functions of MRAs. Given data on a discrete grid or mesh, a subdivision scheme produces a (typically smoother) output function on a refined grid or mesh. Iterating this process produces a sequence of functions defined on increasingly finer grids. This raises the questions whether the process converges to some limit, whether this limit is smooth, and what kind of approximation properties it has.

Data in the space S2 x Pos3; after one, two, and three subdivision rounds.

Subdivision schemes in nonlinear geometries. I am interested in subdivision schemes for data with values in a manifold M, i.e., data f: V → M. In particular, I consider the case of irregular combinatorics V. In this context, irregular means that the combinatorics are not even locally those of the canonical lattice in the plane. Irregular combinatorics naturally appear in connection with domains of nonzero Euler characteristic: for example, if f approximates a function defined on the sphere S2, the underlying complex should be homeomorphic to S2, which means that the combinatorics must be irregular. Examples of geometric data are poses of airplanes (Euclidean motion group) or diffusion tensor images (positive matrices). I have defined a new class of geometric subdivision schemes based on intrinsic means, which distinguishes them from others by preserving many symmetries. From an analytic point of view, I showed C1 smoothness of geometric nonlinear schemes for irregular combinatorics; the strength of the results is comparable with the linear case. I also obtained smoothness results for nonlinear schemes with general dilation matrices for regular as well as irregular combinatorics.
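The basic idea of replacing affine averages by intrinsic (geodesic) ones can be sketched on the sphere S^2; this is a hypothetical minimal example of mine (a geodesic midpoint-insertion scheme with a simple smoothing step), not the specific schemes analyzed above:

```python
import numpy as np

def slerp(a, b, t):
    """Point at parameter t on the great-circle arc from a to b on S^2."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if omega < 1e-12:
        return a.copy()
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def geodesic_midpoint_subdivision(pts, rounds=2):
    """One geodesic analogue of linear subdivision on S^2: insert geodesic
    midpoints, then pull interior vertices toward the geodesic mean of the
    adjacent edge midpoints. All affine averages become geodesic averages."""
    p = [np.asarray(x, float) for x in pts]
    for _ in range(rounds):
        mids = [slerp(p[i], p[i + 1], 0.5) for i in range(len(p) - 1)]
        new = []
        for i, x in enumerate(p):
            if 0 < i < len(p) - 1:
                x = slerp(x, slerp(mids[i - 1], mids[i], 0.5), 0.5)
            new.append(x)
            if i < len(mids):
                new.append(mids[i])
        p = new
    return np.array(p)
```

Because every operation is a geodesic average, all output points remain exactly on the sphere, which is the whole point of the intrinsic construction.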

Multiscale transforms in nonlinear geometries. The geometric multiscale transforms introduced by D. Donoho and his colleagues employ an interpolatory geometric subdivision scheme as a prediction operator, i.e., the data sitting at, say, the odd integers are estimated by a subdivision operator seeing only the even-integer data. The difference (measured in a suitable tangent space) between the estimate and the actual data gives the detail at this level of resolution. I defined a related transform for data defined on irregular combinatorics and was able to characterize membership in Lipschitz and Hölder-Zygmund classes by the coefficient decay with respect to this multiscale transform.
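The predict-and-store-detail mechanism can be sketched in a scalar toy version (my own illustration, using simple midpoint prediction; for manifold values the difference would instead be taken in a tangent space via the log map):

```python
import numpy as np

def decompose(data):
    """One level of an interpolatory multiscale transform (scalar toy version,
    assumes odd-length input): keep even-indexed samples, predict odd-indexed
    ones by midpoint interpolation, store the prediction error as detail."""
    coarse = data[0::2]
    predicted = 0.5 * (coarse[:-1] + coarse[1:])   # prediction at odd positions
    detail = data[1::2][:len(predicted)] - predicted
    return coarse, detail

def reconstruct(coarse, detail):
    """Invert decompose(): re-predict the odd samples and add the details."""
    predicted = 0.5 * (coarse[:-1] + coarse[1:])
    out = np.empty(len(coarse) + len(detail))
    out[0::2] = coarse
    out[1::2] = predicted + detail
    return out
```

The transform is perfectly invertible, and for smooth data the detail coefficients are small; their decay across levels is exactly what encodes smoothness-class membership.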


Harmonic analysis of Jacobi translation operators

Visualization of the kernel for the Legendre translation.

Generalized translations appear in the harmonic analysis of symmetric and homogeneous spaces. They are relevant for many stochastic questions involving the convolution of measures or distributions on such spaces. Symmetric and homogeneous spaces are also interesting from an application point of view, since they are typical data spaces when dealing with manifold-valued data. The major difference to the familiar translation in Euclidean space is that the translated function is not local: at a given point, it does not depend only on the original function at a single corresponding point (or on a neighborhood of it, when dealing with Lebesgue spaces). Instead, it is an average in a certain non-local, non-smooth sense. This makes the analysis more involved.

My work in this area deals with the respective homogeneous Banach spaces, i.e., subspaces invariant under Jacobi translation. In particular, I am interested in Jacobi Lipschitz spaces LipB(α) formed with respect to a (general) homogeneous Banach space B. A function f belongs to the corresponding Lipschitz space whenever the distance between f and its Jacobi translate Txf decays at least as fast as (1-x)^α when x approaches the neutral element 1. If this decay is faster than (1-x)^α, then f belongs to the little Lipschitz space lipB(α). These spaces are related to spaces of best approximation by polynomials on the closed interval. My most important contribution in this area is that the bidual of the little Lipschitz space lipB(α) (in the sense of Banach spaces) is isometrically isomorphic to the corresponding big Lipschitz space LipB(α). Furthermore, I could show that these Lipschitz spaces are quasi-Banach algebras with respect to pointwise multiplication whenever B has this property.
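In symbols (my transcription of the definitions and of the bidual result above):

```latex
f \in \mathrm{Lip}_B(\alpha) \;\iff\; \|T_x f - f\|_B = \mathcal{O}\!\bigl((1-x)^{\alpha}\bigr),
\qquad
f \in \mathrm{lip}_B(\alpha) \;\iff\; \|T_x f - f\|_B = o\bigl((1-x)^{\alpha}\bigr),
\qquad x \to 1^-,
```

and the bidual identification reads

```latex
\bigl(\mathrm{lip}_B(\alpha)\bigr)^{**} \;\cong\; \mathrm{Lip}_B(\alpha)
\quad \text{(isometrically)}.
```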