#### 1. Towards a flexible component-based architecture of machine learning software

As most of us conduct research in machine learning and data mining, we
frequently need to implement and test data processing algorithms. Although
there are many supporting tools, such as RSES/Rseslib, WEKA, Matlab, R, and
SNNS, this task is still difficult, and most of the time devoted to research
is taken up by software implementation. The speed of the implementation
process can differ several-fold, depending mostly on the software
architecture. Therefore, the choice of architecture is crucial for the
time-efficiency of our research work.

In this talk I will propose an architecture which, in my view, is more
flexible and easier to use than existing ones. It stems from many years of
experience in implementing various machine learning algorithms. Moreover,
it has become the framework for the algorithms I am implementing now,
related to neural networks and computer vision, so it has already been
partially verified in a real-world implementation.

#### 2. Nondeterministic discretization of weights improves accuracy of neural networks

Neural networks are well-established tools in machine learning, with proven
effectiveness in many real-world problems. However, there are still many
tasks in which they perform worse than other systems. One of the reasons is
that neural networks contain thousands of real-valued adaptive parameters,
so they have a strong tendency to become overtrained (overfitted). Thus,
methods to improve their generalization abilities are necessary.

In my talk I will present a new method, based on nondeterministic
discretization of neural weights. The method is easy to implement, yet it
can lead to a significant improvement in the accuracy of the network.

Most interestingly, this algorithm also shows how methods of continuous
optimization, such as gradient descent, can be successfully applied to
optimization over discontinuous (e.g. discrete) spaces.
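The abstract does not specify the discretization procedure itself. As a hedged illustration only, one plausible form of nondeterministic weight discretization is stochastic rounding onto a fixed grid: each weight is rounded up or down at random, with probabilities chosen so that the rounded value is unbiased. The function name and grid spacing below are assumptions for the sketch, not the author's actual method:

```python
import random

def stochastic_round(w, delta=0.1):
    """Nondeterministically discretize weight w onto a grid of spacing delta.

    w is rounded to one of the two nearest grid points, with probability
    proportional to proximity, so that the expected rounded value equals w.
    (Illustrative sketch; the spacing and scheme are assumptions.)
    """
    lower = delta * (w // delta)       # nearest grid point at or below w
    p_up = (w - lower) / delta         # probability of rounding up
    return lower + delta if random.random() < p_up else lower

weights = [0.234, -1.017, 0.5]
discretized = [stochastic_round(w) for w in weights]
```

Because the rounding is unbiased in expectation, gradient-based updates of the underlying real-valued weights remain meaningful even though the network is evaluated with discrete values, which hints at how continuous optimization can operate over a discrete space.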