Recommended Emacs package: cmake-ide

Using Emacs with flycheck and company is a great experience, but what bothers me is that I need to carefully set many variables so that the configuration used by these helper tools matches my actual build configuration – e.g. include directories, extra compiler and linker flags, etc. I need to apply these settings per project, and keep updating them whenever I change my build config in any way. This is particularly inconvenient for projects with a configurable build process.

I use CMake for most of my projects, and I’ve recently found a package that can utilize it to automatically configure many other Emacs packages. The package is called cmake-ide, and it is available on MELPA.

There is literally zero configuration required. It automatically discovers whether a file you are editing belongs to a CMake project, runs CMake to prepare an out-of-tree build, and investigates the compile_commands.json generated by CMake to figure out the precise build config for each file. It then uses this information to set up irony, flycheck, rtags, company-clang, and probably some other packages too. Whenever the build config might change, cmake-ide automatically updates everything.
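
Getting it running took just a couple of lines in my init file (a minimal sketch, assuming cmake-ide and rtags are already installed from MELPA):

;; Minimal cmake-ide setup – assumes cmake-ide (and optionally
;; rtags) are already installed from MELPA.
(require 'rtags)    ; only needed if you want the rtags integration
(cmake-ide-setup)   ; hook cmake-ide into C/C++ buffers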

Super convenient.


Coffee Maker 360 Render

Rendered with RGKrt http://cielak.org/phile/software/rgkrt

The original render was 800×1000, so it is best viewed fullscreen with HD enabled.

Coffee Maker model by cekuhnen: http://www.blendswap.com/user/cekuhnen, used under the terms of the CC-BY 3.0 license.

My raytracer

It took me quite a while to get around to writing about it, but last semester I attended a photorealistic graphics course and made a nice raytracer.

Dragon Sponza render

See the full animation here, or play around with an interactive 360 view here.

More examples and a full list of features are shown on my website. The source code is available in a GitHub repository.

Exploring Neural Networks, p. 3

(see part 2 here)

So I eventually got around to analysing how the training mini-batch size affects a network that uses Batch Normalisation. There are several factors at play here:

  • A larger batch size is good for normalisation – the more samples we normalise over, the more accurate the estimates of the batch statistics are (see the sketch just after this list). In effect, a large mini-batch size should cause the estimates to vary less between mini-batches.
  • A smaller batch size results in more frequent (if noisier) stochastic gradient descent steps, which may increase learning speed and the final success rate.
  • It is computationally cheaper to process large batches, because of the parallel nature of modern hardware (especially GPUs).
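
For reference, the core of the Batch Normalisation transform over a mini-batch looks like this (a NumPy sketch of the standard Ioffe & Szegedy formulation; the estimates mu and var are exactly the quantities whose quality depends on the batch size):

import numpy as np

def batch_normalise(x, gamma, beta, eps=1e-5):
    """Normalise a mini-batch x of shape (m, features) – the standard
    Batch Normalisation transform (Ioffe & Szegedy, 2015)."""
    mu = x.mean(axis=0)                    # mini-batch mean
    var = x.var(axis=0)                    # mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalised activations
    return gamma * x_hat + beta            # learned scale and shift

The larger the mini-batch, the closer mu and var get to the true statistics of the activations, which is why the first point above favours large batches.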

Presumably there is an optimal mini-batch size for Batch Normalisation. In order to find it, I tested the same network again with various mini-batch sizes, observed its performance, averaged the results from multiple runs, and plotted them.
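
The procedure boils down to a loop along these lines (a Python sketch; make_network and train_and_eval are hypothetical stand-ins for the actual training code, and the listed sizes are just examples):

# Sketch of the experiment loop. make_network() and train_and_eval()
# are hypothetical stand-ins for the actual training code.
batch_sizes = [16, 32, 64, 128, 256]    # example values
n_runs = 5                              # runs to average over

results = {}
for batch_size in batch_sizes:
    scores = [train_and_eval(make_network(), batch_size)
              for _ in range(n_runs)]
    results[batch_size] = sum(scores) / n_runs  # mean success rate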


Exploring Neural Networks, p. 2

(see part 1 here)

Once I had fixed all the bugs in my Batch Normalisation implementation and fine-tuned all the parameters, I started getting reasonable results. In particular, it turned out that I needed to significantly (more than 10 times) increase the weight decay constant. I also had to modify the learning rate schedule so that it decays much faster; this makes sense, because Batch Normalisation is supposed to speed up learning. Eventually, the network:

3 channels ->  64 3x3 convolutions -> 3x3 maxpool -> BN -> ReLU
           -> 128 3x3 convolutions -> 2x2 maxpool -> BN -> ReLU
           -> 1024 to 1024 product ->                BN -> ReLU
           -> 1024 to  512 product ->                BN -> Sigmoid
           ->  512 to   10 product
           ->  SoftMax

has achieved a 79% success rate on the test set.
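
For illustration, the same stack could be written down in Keras roughly as follows (a sketch under my own assumptions – Keras is not what my experiments actually use, and the layer sizes simply follow the diagram above):

# Keras sketch of the network above – illustration only,
# not the code actually used in these experiments.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(64, (3, 3), padding='same', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((3, 3)),
    layers.BatchNormalization(),
    layers.Activation('relu'),

    layers.Conv2D(128, (3, 3), padding='same'),
    layers.MaxPooling2D((2, 2)),
    layers.BatchNormalization(),
    layers.Activation('relu'),

    layers.Flatten(),
    layers.Dense(1024),
    layers.BatchNormalization(),
    layers.Activation('relu'),

    layers.Dense(512),
    layers.BatchNormalization(),
    layers.Activation('sigmoid'),

    layers.Dense(10),
    layers.Activation('softmax'),
])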

I was interested in the advantage gained by using BN. To investigate it, I created another network – an identical clone of the one described above, but with no Batch Normalisation performed at all. Comparing the results of these two networks should show the gain introduced by BN.


Exploring Neural Networks, p. 1

As the final assignment of the Neural Networks course I took part in (University of Wrocław, Institute of Computer Science, winter 2015/2016), I am tasked with designing, implementing and training a neural net that classifies CIFAR-10 images with some reasonable success rate. I am also encouraged to experiment with the network by implementing some recent inventions that may, in one way or another, improve its performance. I will be sharing my results and observations here, in this post and in a few more that will follow within the next two weeks.

The source code I am using for my experiments is available on GitHub. The sources come with a number of utilities that simplify running them on our lab's computers, which may come in handy if you are a fellow student peeking at my progress; if you are not, you should ignore all files except the ones within the ./project directory.


QEMU – main-loop: WARNING: I/O thread spun for 1000 iterations

When upgrading the virtual machine I use, I stumbled upon an issue where the guest OS would hang every time it performed any kind of heavy hard drive I/O. The QEMU monitor would only display:

main-loop: WARNING: I/O thread spun for 1000 iterations

Some digging led me to the following workaround:

diff -u a/vl.c b/vl.c
--- a/vl.c	2015-11-20 01:45:00.179169442 +0100
+++ b/vl.c	2015-11-20 01:44:22.181778840 +0100
@@ -1914,6 +1914,7 @@
 #endif
     do {
         nonblocking = !kvm_enabled() && !xen_enabled() && last_io > 0;
+        nonblocking = 0;
 #ifdef CONFIG_PROFILER
         ti = profile_getclock();
 #endif

The added line simply forces the main loop to always wait in blocking mode, which prevents the spinning. For an explanation of the nature of this issue, read this discussion.

Current progress on AlgAudio

… or “what I’ve been working on for the past three months”.

So this summer I participated in a programming internship at the Audiovisual Technology Center – CeTA in Wrocław. CeTA is developing a number of very exciting projects, and the one I had the pleasure of working on is AlgAudio.

(AlgAudio screenshot)

(download links available below)

AlgAudio is a new signal processing framework that we’ve been developing from scratch. The user builds an audio processing network by placing “building blocks” of simple operations, connecting them together, configuring their parameters, and defining how the parameters should influence each other. The network works in real time, so any changes to the parameters are immediately reflected in the audio output. This makes AlgAudio a perfect tool for live performances.


Prevent full-screen games from minimizing when switching workspaces

When I play games on my Ubuntu desktop, I like to switch workspaces a lot. For example, while waiting for a respawn I will quickly switch to a second workspace to select a different music track, or to write a quick reply on IM. What I find very inconvenient is that a lot of games will, by default, minimize when I switch workspaces. Because of that, it takes me more time to return to the game – a workspace-switch shortcut, and then Alt+Tab.

It turns out that this is an SDL feature, so all games built with SDL will behave this way. However, there is an easy, little-known way to disable it. Simply set the following environment variable

export SDL_VIDEO_MINIMIZE_ON_FOCUS_LOSS=0

before starting your game. Or, if you dislike this feature as much as I do, you may want to set that variable in your .profile file, or maybe even in /etc/environment.
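
And if you happen to be the game’s developer, SDL2 exposes the same switch as a hint that can be set from code (a C sketch, assuming SDL2; the hint must be set before the window is created):

#include <SDL.h>

int main(int argc, char *argv[]) {
    /* Keep the window on screen when it loses focus; this must be
       done before the window is created. */
    SDL_SetHint(SDL_HINT_VIDEO_MINIMIZE_ON_FOCUS_LOSS, "0");

    SDL_Init(SDL_INIT_VIDEO);
    /* ... create the window and run the game as usual ... */
    SDL_Quit();
    return 0;
}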

Enjoy flawless workspace switching when gaming!

Sloped triangles tessellation

  • Model: Sloped triangles tessellation
  • Designed and folded by: Rafał Cieślak
    • Inspired by Eric Joisel’s Hedgehog.
    • I later learned that I was not the first to come up with such a pattern [1] [2]. That’s not surprising, given how simple the molecule is.
  • Paper size: A4
  • Folding time: ~4h

(photos: Sloped Triangles Tessellation)

Folded on 20 June 2015