# Scientific Track

## Refined Algorithms for Infinitely Many-Armed Bandits with Deterministic Rewards

We consider a variant of the Multi-Armed Bandit problem which involves a large pool of a priori identical arms (or items). Each arm is associated with a deterministic value, which is sampled from a probability distribution whose maximal value is unknown, and is revealed once that arm is first chosen. At each time instant the agent may choose a new arm (with unknown value), or a previously chosen arm whose value is already revealed. The goal is to minimize the cumulative regret relative to the best arm in the pool.
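A natural baseline in this setting is a threshold-and-commit strategy: keep sampling fresh arms (each pull reveals that arm's deterministic value) until one exceeds a threshold, then replay it. This is a minimal sketch of that idea, not the paper's refined algorithm; `sample_arm`, `threshold`, and `horizon` are illustrative names.

```python
import random

def threshold_commit(sample_arm, threshold, horizon):
    """Sample new arms until one's deterministic value reaches
    `threshold`, then commit to that arm for the remaining pulls.
    Returns the total reward collected over `horizon` pulls."""
    total, t, best = 0.0, 0, None
    while t < horizon:
        if best is None:
            value = sample_arm()   # choosing a new arm reveals its value
            total += value
            if value >= threshold:
                best = value       # commit: value is known and fixed
        else:
            total += best          # replay the committed arm
        t += 1
    return total

random.seed(0)
# Arm values drawn uniformly from [0, 1]; the maximal value is 1.
reward = threshold_commit(lambda: random.random(), threshold=0.95, horizon=1000)
```

The threshold controls the exploration/exploitation balance: a higher threshold means more pulls wasted on mediocre new arms, but a better committed arm afterwards.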

## Ising Bandits with Side Information

We develop an online learning algorithm for bandits on a graph with side information, where there is an underlying Ising distribution over the vertices at low temperatures. We are motivated by practical settings where the graph state in a social network or a network of computer hosts (potentially) changes at every trial, intrinsically partitioning the graph and thus requiring the learning algorithm to play the bandit on the current partition. Our algorithm essentially functions as a two-stage process.

## Early Classification of Time Series as a Non Myopic Sequential Decision Making Problem

Classification of time series as early as possible is a highly sought-after goal. Indeed, in many application domains, the earlier the decision, the more rewarding it can be. Yet gathering more information often allows one to make a better decision. Optimizing this time vs. accuracy trade-off must generally be done online and is a complex problem. This paper presents a formal criterion that expresses this trade-off in full generality, together with a generic sequential meta-algorithm to solve it. This meta-algorithm is interesting in two ways.
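The non-myopic flavour of such a stopping rule can be sketched as follows: at each time step, decide now only if the current estimated total cost (misclassification risk plus time penalty) is no worse than the estimated cost at any future step. This is a generic illustration of that idea, not the paper's exact criterion; `costs_by_time` is an assumed input of per-step cost estimates.

```python
def nonmyopic_stop(costs_by_time):
    """Return the decision time t at which the estimated cost of
    deciding is no larger than at any later time step.
    `costs_by_time[t]` estimates the total cost (error risk + time
    penalty) of committing to a classification at time t."""
    T = len(costs_by_time)
    for t in range(T):
        if costs_by_time[t] <= min(costs_by_time[t:]):
            return t          # no future step looks cheaper: stop now
    return T - 1

# Cost first drops (more data improves accuracy), then rises
# (the time penalty starts to dominate).
t_star = nonmyopic_stop([0.9, 0.6, 0.4, 0.45, 0.7])
```

A myopic rule would only compare the current cost with the next step's; looking over the whole remaining horizon avoids stopping in a local dip.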

## Drift Detection using Stream Volatility

Current concept-drift detection methods for data streams look at differences in the underlying data distribution using statistical measures based on the mean and variance. Existing methods are unable to proactively approximate the probability of a concept drift occurring and predict future drift points. We extend the current drift detection design by proposing the use of historical drift trends to estimate the probability of expecting a drift at different points across the stream, which we term the expected drift probability.
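One simple way to turn historical drift trends into such a probability is to use the empirical distribution of past inter-drift intervals. The sketch below is an illustrative simplification under that assumption, not the paper's method; `drift_points` and `horizon` are hypothetical names.

```python
def expected_drift_probability(drift_points, horizon):
    """Estimate the probability of observing a drift within the next
    `horizon` instances, from the empirical distribution of the
    intervals between historically detected drift points."""
    intervals = [b - a for a, b in zip(drift_points, drift_points[1:])]
    if not intervals:
        return 0.0  # no history yet: no basis for an estimate
    hits = sum(1 for iv in intervals if iv <= horizon)
    return hits / len(intervals)

# Drifts previously detected at these stream positions:
p = expected_drift_probability([100, 220, 330, 460], horizon=120)
```

Such an estimate lets a detector raise its sensitivity in regions of the stream where a drift is deemed likely, rather than reacting only after the fact.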

## Ageing-based Multinomial Naive Bayes Classifiers over Opinionated Data Streams

The long-term analysis of opinionated streams requires algorithms that predict the polarity of opinionated documents while adapting to different forms of concept drift: the class distribution may change, but so may the vocabulary used by the document authors. One of the key properties of a stream classifier is adaptation to concept drifts and shifts; this is typically achieved through ageing of the data. Surprisingly, for one of the most popular classifiers, Multinomial Naive Bayes (MNB), no ageing has been considered thus far. MNB is particularly appropriate for opinionated streams, because …
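One common way to introduce ageing into a count-based classifier is a fading factor that decays all stored counts before each update, so old documents gradually lose influence. The class below is an illustrative sketch of that scheme applied to MNB, not the paper's exact formulation; the class name and parameters are assumptions.

```python
import math
from collections import defaultdict

class AgeingMNB:
    """Multinomial Naive Bayes whose class and word counts are decayed
    by a fading factor before each update (illustrative ageing scheme)."""
    def __init__(self, fading=0.99, alpha=1.0):
        self.fading = fading   # per-document decay applied to all counts
        self.alpha = alpha     # Laplace smoothing constant
        self.class_counts = defaultdict(float)
        self.word_counts = defaultdict(lambda: defaultdict(float))
        self.vocab = set()

    def update(self, words, label):
        # age every stored count, then absorb the new document
        for c in self.class_counts:
            self.class_counts[c] *= self.fading
            for w in self.word_counts[c]:
                self.word_counts[c][w] *= self.fading
        self.class_counts[label] += 1.0
        for w in words:
            self.word_counts[label][w] += 1.0
            self.vocab.add(w)

    def predict(self, words):
        best, best_lp = None, -math.inf
        total = sum(self.class_counts.values())
        v = len(self.vocab)
        for c, cc in self.class_counts.items():
            lp = math.log(cc / total)  # (aged) class prior
            denom = sum(self.word_counts[c].values()) + self.alpha * v
            for w in words:
                lp += math.log((self.word_counts[c][w] + self.alpha) / denom)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

clf = AgeingMNB()
clf.update(["great", "film"], "pos")
clf.update(["awful", "plot"], "neg")
label = clf.predict(["great", "plot", "film"])
```

With `fading=1.0` this reduces to standard incremental MNB; values below 1 let the model track both class-distribution drift and vocabulary change.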

## Unsupervised Feature Analysis with Class Margin Optimization

Unsupervised feature selection has attracted research attention in the machine learning and data mining communities for decades. In this paper, we propose an unsupervised feature selection method that seeks a feature coefficient matrix to select the most distinctive features. Specifically, our proposed algorithm integrates the Maximum Margin Criterion with a sparsity-based model into a joint framework, where the class margin and feature correlation are taken into account simultaneously.

## Multi-View Semantic Learning for Data Representation

Many real-world datasets are represented by multiple features or modalities which often provide compatible and complementary information about each other. In order to obtain a good data representation that synthesizes multiple features, researchers have proposed different multi-view subspace learning algorithms. Although label information has been exploited for guiding multi-view subspace learning, previous approaches either fail to directly capture the semantic relations between labeled items or make an unrealistic Gaussian assumption about the data distribution.

## Markov blanket discovery in positive-unlabelled and semi-supervised data

The importance of Markov blanket discovery algorithms is twofold: as the main building block in constraint-based structure learning of Bayesian network algorithms and as a technique to derive the optimal set of features in filter feature selection approaches. Equally, learning from partially labelled data is a crucial and demanding area of machine learning, and extending techniques from fully to partially supervised scenarios is a challenging problem.

## Solving a Hard Cutting Stock Problem by Machine Learning and Optimisation

We are working with a company on a hard industrial optimisation problem: a version of the well-known Cutting Stock Problem in which a paper mill must cut rolls of paper following certain cutting patterns to meet customer demands. In our problem each roll to be cut may have a different size, there are multiple mills, and the cutting patterns are continuous and semi-automated, so that we have only indirect control over them via a list of parameters. We solve it using a combination of machine learning and optimisation techniques.

## Multiple Incomplete Views Clustering via Weighted Nonnegative Matrix Factorization with L2,1 Regularization

With the advance of technology, data often come with multiple modalities or from multiple sources. Multi-view clustering provides a natural way to generate clusters from such data. Although multi-view clustering has been successfully applied in many applications, most previous methods assume that each view is complete (i.e., that each instance appears in all views). However, in real-world applications it is often the case that a number of views are available for learning but none of them is complete.
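Missing instances in a view can be handled in NMF by a weight matrix that zeroes out the unobserved entries of the objective. The sketch below shows that core ingredient with standard multiplicative updates for the weighted objective ||W ⊙ (X − UV)||²; it is an illustrative simplification, not the paper's full multi-view model, and it omits the L2,1 regulariser.

```python
import numpy as np

def weighted_nmf(X, W, k, iters=200, eps=1e-9, seed=0):
    """Weighted NMF: minimise ||W * (X - U @ V)||_F^2 with W[i, j] = 0
    marking a missing entry (e.g. an instance absent from this view).
    Uses standard multiplicative update rules."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, k)) + eps
    V = rng.random((k, m)) + eps
    WX = W * X
    for _ in range(iters):
        WUV = W * (U @ V)
        U *= (WX @ V.T) / (WUV @ V.T + eps)   # update U, V fixed
        WUV = W * (U @ V)
        V *= (U.T @ WX) / (U.T @ WUV + eps)   # update V, U fixed
    return U, V

rng = np.random.default_rng(1)
X = rng.random((8, 3)) @ rng.random((3, 6))        # exactly rank-3 data
W = (rng.random(X.shape) > 0.2).astype(float)      # ~20% entries missing
U, V = weighted_nmf(X, W, k=3)
err = np.linalg.norm(W * (X - U @ V))              # error on observed entries
```

Because the weights confine the loss to observed entries, the factorization is fitted only where data exist, while `U @ V` still yields values for the missing ones.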

## Connect