Coffee Maker 360 Render

Rendered with RGKrt http://cielak.org/phile/software/rgkrt

The original render is 800×1000, so it is best viewed fullscreen with HD enabled.

Coffee Maker model by cekuhnen: http://www.blendswap.com/user/cekuhnen, used under the terms of CC-BY-3.0 license.

My raytracer

It took me quite a while to get around to writing about it, but during the last semester I attended a photorealistic graphics course and made a nice raytracer.

Dragon Sponza render

See the full animation here, or play around with an interactive 360 view here.

More examples and a full list of features are displayed on my website. The source code is available in a GitHub repository.

Exploring Neural Networks, p. 3

(see part 2 here)

So I eventually got around to analysing how the training mini-batch size affects a network that uses Batch Normalisation. There are several factors at play here:

  • A larger batch size is good for normalisation – the more samples we normalise over, the closer the estimated mean and variance are to the statistics of the whole training set. In effect, a large mini-batch size should cause the estimates to vary less from one mini-batch to the next (see the sketch right after this list).
  • A smaller batch size results in more frequent (though noisier) stochastic gradient descent steps per epoch, which may increase learning speed and the final success rate.
  • It is computationally cheaper to process large batches, because of the parallel nature of modern hardware (especially GPUs).
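
To illustrate the first point, here is a quick NumPy sketch (purely illustrative, not part of my experiment code) showing how the per-batch mean – one of the two statistics Batch Normalisation uses – fluctuates less as the batch size grows:

import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the pre-activations of a single unit over the whole training set.
activations = rng.normal(loc=1.0, scale=2.0, size=100_000)

for batch_size in (8, 32, 128, 512):
    # Split into mini-batches and compute the per-batch mean, as BN would.
    usable = len(activations) // batch_size * batch_size
    batch_means = activations[:usable].reshape(-1, batch_size).mean(axis=1)
    print(f"batch size {batch_size:4d}: std of per-batch means = {batch_means.std():.4f}")

The spread of the per-batch means shrinks roughly as 1/sqrt(batch size), which is why larger batches give more stable normalisation.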

Presumably there is an optimal mini-batch size for Batch Normalisation. In order to find it, I trained the same network with various mini-batch sizes, observed its performance, averaged the results from multiple runs, and plotted them.
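
The sweep itself is straightforward. Roughly, it looks like the sketch below; train_and_evaluate is a hypothetical placeholder standing in for the actual training code in the repository:

import numpy as np
import matplotlib.pyplot as plt

def train_and_evaluate(batch_size):
    # Hypothetical helper: train the BN network with the given mini-batch size
    # and return its success rate on the test set. The real training code lives
    # in the repository; this stub only keeps the sketch self-contained.
    raise NotImplementedError

BATCH_SIZES = [16, 32, 64, 128, 256, 512]
RUNS_PER_SIZE = 5  # average several runs to smooth out random initialisation

means, stds = [], []
for batch_size in BATCH_SIZES:
    rates = [train_and_evaluate(batch_size) for _ in range(RUNS_PER_SIZE)]
    means.append(np.mean(rates))
    stds.append(np.std(rates))

plt.errorbar(BATCH_SIZES, means, yerr=stds, marker="o")
plt.xscale("log")
plt.xlabel("mini-batch size")
plt.ylabel("test set success rate")
plt.show()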

Read the rest of this entry »

Exploring Neural Networks, p. 2

(see part 1 here)

Once I fixed all the bugs in my Batch Normalisation implementation and fine-tuned all the parameters, I started getting reasonable results. In particular, it turned out that I needed to increase the weight decay constant significantly (more than 10 times). I also had to modify the learning rate schedule so that it decays much faster; this makes sense, because Batch Normalisation is supposed to speed up learning. Eventually, the network:

3 channels ->  64 3x3 convolutions -> 3x3 maxpool -> BN -> ReLU
           -> 128 3x3 convolutions -> 2x2 maxpool -> BN -> ReLU
           -> 1024 to 1024 product ->                BN -> ReLU
           -> 1024 to  512 product ->                BN -> Sigmoid
           ->  512 to   10 product
           ->  SoftMax

achieved a 79% success rate on the test set.

I was interested in how much of an advantage BN gives. To investigate this, I created another network, an identical clone of the one described above, except that no Batch Normalisation is performed at all. Comparing the results of these two networks should quantify the gain introduced by BN.
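
For readers who prefer code to the diagram, here is a rough PyTorch-style sketch of the architecture above (my actual code is not PyTorch, and the padding, stride and flattened-size details below are only illustrative); the use_bn flag shows how the BN-free clone differs:

import torch.nn as nn

def make_network(use_bn=True):
    # Wrap a BatchNorm layer so it can be dropped when building the BN-free clone.
    def bn(layer):
        return [layer] if use_bn else []

    return nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3),
        nn.MaxPool2d(kernel_size=3),
        *bn(nn.BatchNorm2d(64)),
        nn.ReLU(),

        nn.Conv2d(64, 128, kernel_size=3),
        nn.MaxPool2d(kernel_size=2),
        *bn(nn.BatchNorm2d(128)),
        nn.ReLU(),

        nn.Flatten(),
        nn.LazyLinear(1024),   # the "1024 to 1024 product"; input size inferred
        *bn(nn.BatchNorm1d(1024)),
        nn.ReLU(),

        nn.Linear(1024, 512),
        *bn(nn.BatchNorm1d(512)),
        nn.Sigmoid(),

        nn.Linear(512, 10),
        # SoftMax is folded into the loss during training (e.g. nn.CrossEntropyLoss).
    )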

Read the rest of this entry »

Exploring Neural Networks, p. 1

As the final assignment of the Neural Networks course I took part in (University of Wrocław, Institute of Computer Science, winter 2015/2016), I am tasked with designing, implementing and training a neural net that classifies CIFAR-10 images with a reasonable success rate. I am also encouraged to experiment with the network by implementing some recent inventions that may, in one way or another, improve its performance. I will be sharing my results and observations here, in this post and in several that will follow within the next two weeks.

The source code I am using for my experiments is available on GitHub. The sources come with a number of utilities that simplify running them on our lab’s computers, which may come in handy if you are a fellow student peeking at my progress; if you are not, you should ignore all files except the ones within the ./project directory.

Read the rest of this entry »

QEMU – main-loop: WARNING: I/O thread spun for 1000 iterations

While upgrading the virtual machine I use, I stumbled upon an issue where the guest OS would hang every time it performed any kind of heavy hard drive I/O. The QEMU monitor would only display:

main-loop: WARNING: I/O thread spun for 1000 iterations

Some digging led me to the following workaround:

diff -u a/vl.c b/vl.c
--- a/vl.c	2015-11-20 01:45:00.179169442 +0100
+++ b/vl.c	2015-11-20 01:44:22.181778840 +0100
@@ -1914,6 +1914,7 @@
 #endif
     do {
         nonblocking = !kvm_enabled() && !xen_enabled() && last_io > 0;
+        nonblocking = 0;
 #ifdef CONFIG_PROFILER
         ti = profile_getclock();
 #endif

For an explanation of the nature of this issue, read this discussion.

Current progress on AlgAudio

… or “what I’ve been working on for the past three months”.

So this summer I participated in a programming internship at the Audiovisual Technology Center (CeTA) in Wrocław. CeTA is developing a number of very exciting projects, and the one I had the pleasure of working on is AlgAudio.

screenshot

(download links available below)

AlgAudio is a new signal processing framework that we’ve been developing from scratch. The user builds an audio processing network by placing “building blocks” of simple operations, connecting them together, configuring their parameters, and defining how the parameters should influence each other. The network works in real time, so any changes to the parameters are immediately reflected in the output audio. This makes AlgAudio a perfect tool for live performances.

Read the rest of this entry »