# Scientific Track

## Bayesian Modelling of the Temporal Aspects of Smart Home Activity with Circular Statistics

Typically, when analysing patterns of activity in a smart home environment, the daily patterns of activity are either ignored completely or summarised into a high-level "hour-of-day" feature that is then combined with sensor activities. However, when summarising the temporal nature of an activity into a coarse feature such as this, not only is information lost after discretisation, but also the strength of the periodicity of the action is ignored.
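As a rough illustration of why a circular treatment of time retains more information than an hour-of-day bin, the sketch below (not the paper's Bayesian model) maps event times onto the unit circle and computes the circular mean together with the mean resultant length, which measures how strongly periodic the activity is. The function name and the 24-hour encoding are illustrative assumptions.

```python
import math

def circular_summary(hours):
    """Summarise event times (hours in [0, 24)) on the unit circle.

    Returns the circular mean hour and the mean resultant length R:
    R close to 1 indicates a strongly periodic activity, R close to 0
    a nearly uniform one -- information a coarse hour-of-day bin discards.
    """
    angles = [2 * math.pi * h / 24.0 for h in hours]
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    mean_angle = math.atan2(s, c) % (2 * math.pi)
    resultant = math.hypot(c, s)
    return 24.0 * mean_angle / (2 * math.pi), resultant

# A lamp switched on around 7pm every evening: tight cluster, high R.
mean_hour, strength = circular_summary([18.8, 19.1, 19.0, 19.3, 18.9])
```

Note that averaging raw hours would fail for an activity clustered around midnight (e.g. 23.5 and 0.5 average to 12), which is exactly the wrap-around problem the circular representation avoids.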

## Superset Learning Based on Generalized Loss Minimization

In standard supervised learning, each training instance is associated with an outcome from a corresponding output space (e.g., a class label in classification or a real number in regression). In the superset learning problem, the outcome is only characterized in terms of a superset---a subset of candidates that covers the true outcome but may also contain additional ones. Thus, superset learning can be seen as a specific type of weakly supervised learning, in which training examples are ambiguous.
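One common way to generalize a loss to superset-labelled data, which may serve as a minimal sketch here (the function and example names are illustrative, not the paper's exact formulation), is the optimistic superset loss: a prediction is charged the smallest base loss over the candidate set.

```python
def optimistic_superset_loss(prediction, candidates, base_loss):
    """Optimistic generalisation of a loss to superset labels: the
    prediction is compared against the most favourable candidate,
    i.e. charged min over the ambiguous label set."""
    return min(base_loss(prediction, y) for y in candidates)

def zero_one(yhat, y):
    return 0.0 if yhat == y else 1.0

# The true label is hidden somewhere inside the superset {"cat", "dog"}:
loss_a = optimistic_superset_loss("cat", {"cat", "dog"}, zero_one)  # consistent
loss_b = optimistic_superset_loss("fox", {"cat", "dog"}, zero_one)  # inconsistent
```

Minimizing such a loss over training data drives the learner toward predictions that are consistent with at least one candidate in every superset.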

## Fast Training of Support Vector Machines for Survival Analysis

Survival analysis is a commonly used technique to identify important predictors of adverse events and to develop guidelines for patient treatment in medical research. When applied to large amounts of patient data, efficient optimization routines become a necessity. We propose efficient training algorithms for three kinds of linear survival support vector machines: 1) ranking-based, 2) regression-based, and 3) combined ranking and regression. We perform optimization in the primal using truncated Newton optimization and use order statistic trees to lower the computational cost of training.


## Dyad Ranking Using a Bilinear Plackett-Luce Model

Label ranking is a specific type of preference learning problem, namely the problem of learning a model that maps instances to rankings over a finite set of predefined alternatives. These alternatives are identified by their name or label while not being characterized in terms of any properties or features that could be potentially useful for learning. In this paper, we consider a generalization of the label ranking problem that we call dyad ranking. In dyad ranking, not only the instances but also the alternatives are represented in terms of attributes.
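To make the Plackett-Luce part concrete, the following sketch computes the probability a Plackett-Luce model assigns to a ranking, with dyad utilities given by a bilinear form exp(x^T W z) over instance features x and alternative features z. This is a minimal illustration under assumed names and toy dimensions, not the paper's learning procedure.

```python
import math

def bilinear_utility(x, z, W):
    """Latent utility of the dyad (x, z): exp(x^T W z)."""
    return math.exp(sum(x[i] * W[i][j] * z[j]
                        for i in range(len(x)) for j in range(len(z))))

def plackett_luce_prob(ranking, x, alternatives, W):
    """Plackett-Luce probability of `ranking` (indices into
    `alternatives`, best first): repeatedly pick the next item with
    probability proportional to its utility among those remaining."""
    prob, remaining = 1.0, list(ranking)
    for k in ranking:
        total = sum(bilinear_utility(x, alternatives[j], W) for j in remaining)
        prob *= bilinear_utility(x, alternatives[k], W) / total
        remaining.remove(k)
    return prob

# Toy example: one instance feature, two alternatives, scalar W.
x = [1.0]
alternatives = [[1.0], [2.0]]
W = [[1.0]]
p_best_first = plackett_luce_prob([1, 0], x, alternatives, W)
p_worst_first = plackett_luce_prob([0, 1], x, alternatives, W)
```

Because the utilities depend on both x and z through W, the same parameter matrix generalizes across instances and across alternatives never seen during training, which is the point of moving from label ranking to dyad ranking.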

## The Difference and The Norm -- Characterising Similarities and Differences between Databases

Suppose we are given a set of databases, such as sales records over different branches. How can we characterise the differences and the norm between these datasets? That is, what are the patterns that characterise the general distribution, and what are those that are important to describe the individual datasets? We study how to discover these pattern sets simultaneously and without redundancy -- automatically identifying both the patterns that help describe the overall distribution and those that are characteristic for specific databases.

## Swap Randomization of Bases of Sequences for Mining Satellite Image Time Series

Swap randomization has been shown to be an effective technique for assessing the significance of data mining results such as Boolean matrices, frequent itemsets, correlations, or clusterings. Basically, instead of applying statistical tests on selected attributes, the global structure of the actual dataset is taken into account by checking whether obtained results are likely or not to occur in randomized datasets whose column and row margins are equal to those of the actual dataset. In this paper, a swap randomization approach for bases of sequences is proposed with the aim of assessing sequential patterns extracted from satellite image time series.
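The classical swap randomization of a 0/1 matrix mentioned above can be sketched as follows (for Boolean matrices, not the sequence variant the paper proposes): repeatedly find a 2x2 "checkerboard" submatrix and flip it, which shuffles the data while leaving every row and column sum unchanged.

```python
import random

def swap_randomize(matrix, n_swaps, seed=0):
    """Randomise a 0/1 matrix while preserving all row and column sums.

    Each attempted swap picks two rows and two columns; if the induced
    2x2 submatrix is [[1,0],[0,1]] or [[0,1],[1,0]], flipping it changes
    the matrix but leaves every margin intact."""
    rng = random.Random(seed)
    m = [row[:] for row in matrix]
    n_rows, n_cols = len(m), len(m[0])
    for _ in range(n_swaps):
        r1, r2 = rng.sample(range(n_rows), 2)
        c1, c2 = rng.sample(range(n_cols), 2)
        if (m[r1][c1] == m[r2][c2] and m[r1][c2] == m[r2][c1]
                and m[r1][c1] != m[r1][c2]):
            m[r1][c1], m[r1][c2] = m[r1][c2], m[r1][c1]
            m[r2][c1], m[r2][c2] = m[r2][c2], m[r2][c1]
    return m

data = [[1, 0, 1],
        [0, 1, 0],
        [1, 1, 0]]
shuffled = swap_randomize(data, 200)
```

A mining result is then deemed significant if it rarely occurs in many such randomized copies of the data.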

## Non-Parametric Jensen-Shannon Divergence

Quantifying the difference between two distributions is a common problem in many machine learning and data mining tasks. What is also common in many tasks is that we only have empirical data. That is, we do not know the true distributions nor their form, and hence, before we can measure their divergence we first need to assume a distribution or perform estimation. For exploratory purposes this is unsatisfactory, as we want to explore the data, not our expectations. In this paper we study how to non-parametrically measure the divergence between two distributions.
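For reference, a naive plug-in baseline (not the paper's estimator) discretises both samples into histograms and applies the Jensen-Shannon formula JS(P,Q) = 0.5 KL(P||M) + 0.5 KL(Q||M) with M = (P+Q)/2; the bin count and data range below are illustrative assumptions, and the sensitivity to these choices is precisely what a non-parametric approach aims to avoid.

```python
import math
from collections import Counter

def js_divergence(sample_p, sample_q, bins=10, lo=0.0, hi=1.0):
    """Plug-in JS divergence estimate (in bits, so bounded by 1):
    histogram both samples over [lo, hi), then apply
    JS(P,Q) = 0.5*KL(P||M) + 0.5*KL(Q||M) with M = (P+Q)/2."""
    def hist(sample):
        counts = Counter(min(int((x - lo) / (hi - lo) * bins), bins - 1)
                         for x in sample)
        n = len(sample)
        return [counts.get(b, 0) / n for b in range(bins)]

    p, q = hist(sample_p), hist(sample_q)
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

same = js_divergence([0.2] * 10, [0.2] * 10)      # identical samples
apart = js_divergence([0.05] * 10, [0.95] * 10)   # fully separated samples
```

Unlike the KL divergence, JS is symmetric and always finite, which makes it a convenient target for a non-parametric estimator.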

## Fast Generation of Best Interval Patterns for Nonmonotonic Constraints

In pattern mining, the main challenge is the exponential explosion of the set of patterns. Typically, to solve this problem, a constraint for pattern selection is introduced. One of the first constraints proposed in pattern mining is the support (frequency) of a pattern in a dataset. Frequency is an anti-monotonic function, i.e., if a pattern is infrequent, then all of its superpatterns are infrequent as well.
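The anti-monotonicity of support is what makes level-wise pruning possible; the Apriori-style sketch below (a standard illustration, not this paper's interval-pattern algorithm, whose constraints are precisely nonmonotonic) generates a candidate only if all of its immediate subpatterns are frequent, so every superset of an infrequent pattern is pruned without ever being counted.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Level-wise mining over sets of frozensets, exploiting the
    anti-monotonicity of support for candidate pruning."""
    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / len(transactions)

    items = sorted({i for t in transactions for i in t})
    frequent = {}
    level = [frozenset([i]) for i in items]
    while level:
        level = [c for c in level if support(c) >= min_support]
        frequent.update({c: support(c) for c in level})
        if not level:
            break
        next_size = len(next(iter(level))) + 1
        candidates = {a | b for a in level for b in level
                      if len(a | b) == next_size}
        # Anti-monotone pruning: keep a candidate only if every
        # immediate subpattern was found frequent at the previous level.
        level = [c for c in candidates
                 if all(frozenset(s) in frequent
                        for s in combinations(c, next_size - 1))]
    return frequent

transactions = [frozenset("ab"), frozenset("abc"), frozenset("ac"), frozenset("b")]
result = frequent_itemsets(transactions, 0.5)
```

When the constraint of interest is not anti-monotonic, as for the "best pattern" constraints studied in this paper, this pruning argument no longer applies, which is what motivates dedicated algorithms.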

## Opening the Black Box: Revealing Interpretable Sequence Motifs in Kernel-based Learning Algorithms

This work is in the context of kernel-based learning algorithms for sequence data. We present a probabilistic approach to automatically extract, from the output of such string-kernel-based learning algorithms, the subsequences---or motifs---truly underlying the machine's predictions. The proposed framework views motifs as free parameters in a probabilistic model, which is solved through a global optimization approach.

## Multi-Task Learning with Group-Specific Feature Space Sharing

When faced with learning a set of inter-related tasks from a limited amount of usable data, learning each task independently may lead to poor generalization performance. Multi-Task Learning (MTL) exploits the latent relations between tasks and overcomes data scarcity limitations by co-learning all these tasks simultaneously to offer improved performance. We propose a novel Multi-Task Multiple Kernel Learning framework based on Support Vector Machines for binary classification tasks.
