October 01, 2014

Wikimedia Suomi

GLAMs and GLAMWiki Toolset

The GLAMWiki Toolset project is a collaboration between various Wikimedia chapters and Europeana. The goal of the project is to provide easy-to-use tools for batch uploads of GLAM (Galleries, Libraries, Archives & Museums) content to Wikimedia Commons. Wikimedia Finland invited the senior developer of the project, Dan Entous, to Helsinki to hold a GW Toolset workshop for representatives of GLAMs and the staff of Wikimedia Finland on 10 September. The workshop was the first of its kind outside the Netherlands.

GLAMWikiToolset training in Helsinki. Photo: Teemu Perhiö. CC-BY

I took part in the workshop as Wikimedia Finland's tech assistant. Since the workshop I have been trying to figure out what is needed to use the toolset from a GLAM perspective. In this text I concentrate on the technical side of these requirements.

What is needed for GWToolset?

From a technical point of view, the use of GWToolset can be split into three sections. First, there are things that must be done before using the toolset. The GWToolset requires metadata as an XML file that is structured in a certain way. The image files must also be addressable by direct URLs, and the domain name of the image server must be added to the upload whitelist on Commons.
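As a rough sketch of what “structured in a certain way” means in practice, the toolset expects one flat record per media file. The element names below are invented for illustration – they are not GWToolset's actual vocabulary – and Python's standard library stands in here for whatever tool produces the file:

```python
import xml.etree.ElementTree as ET

# A sketch of a flat metadata record for a batch-upload tool; the element
# names below are illustrative, not GWToolset's actual vocabulary.
def make_record(title, url, license_name):
    record = ET.Element("record")
    ET.SubElement(record, "title").text = title
    ET.SubElement(record, "url_to_the_media_file").text = url
    ET.SubElement(record, "license").text = license_name
    return record

records = ET.Element("records")
records.append(make_record("Blenda", "http://example.org/images/blenda.jpg", "CC BY 4.0"))
print(ET.tostring(records, encoding="unicode"))
```

The point is only the shape: a repeating flat record element whose child elements map one-to-one to fields of the Commons description template.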

The second section concerns practices in Wikimedia Commons itself. This means getting to know the templates – such as the institution, photograph and artwork templates – as well as finding the categories that are suitable for the uploaded material. For someone who is not a Wikipedian – like myself – it takes a while to get to know the templates and especially the categories.

The third section is actually making the uploads with the toolset itself, which I find easy to use. It has a clear workflow, and with a little assistance GLAMs should have no problems using it. Besides, there is a sandbox called Commons Beta where one can rehearse before going public.

I believe that the bottleneck for GLAMs is the first section: the things that must be done before using the toolset – more precisely, creating a valid XML file for the toolset. Of course, if an organisation has a competent IT department with resources to work on material donations to Wikimedia Commons, there is no problem. However, this could be a problem for smaller – and less well-resourced – organisations.

Converting metadata in practice

As I said, the GWToolset requires an XML file with a certain structure. As far as I know, there is no information system that can directly produce such a file. However, most systems are able to export metadata in XML format. Even though the exported file is not valid for GWToolset, it can be converted into a valid one with XSLT.

XSLT is designed for exactly this task, and it has a very powerful template mechanism for XML handling. This means that the amount of code stays minimal compared to other options. The good news is that such XML transformations are relatively easy to do.

XSLT is our friend when it comes to XML manipulation.


In order to learn what is needed for such transforms with real data, I made a couple of practical demos. I wanted to create a very lightweight solution for transforming the metadata sets for the GWToolset. Modern web browsers are flexible application platforms, and web scraping, for example, can be done easily with JavaScript.

A browser-based solution has many advantages. The first is that every Internet user already has a browser, so there is nothing to download, install or configure. The second is that browser-based applications that use external datasets do not create traffic to the server where the application is hosted. The page files can also be used locally. This allows organisations to download them, modify them, make the conversions locally in-house, and get their materials onto Wikimedia Commons.

XSLT of course requires a platform to run on. A JavaScript library called Saxon-CE provides that platform in the browser. So a web browser offers everything that is needed for metadata conversions: web scraping, XML handling and conversion through XSLT, and user interface components. The XSLT files can of course also be run in any other XSLT environment, such as xsltproc.


Blenda and Hugo Simberg, 1896. source: The National Gallery of Finland, CC BY 4.0

The first demo uses an open data image set published by the Finnish National Gallery. It consists of about one thousand digitised negatives of and by the Finnish artist Hugo Simberg. The set also includes digitally created positives of the images. The metadata is provided as a single XML file.

The conversion in this case is quite simple, since the original XML file is flat (i.e. there are no nested elements). Basically, the original data is passed through as it is, with a few exceptions. The “image” element in the original metadata contains only an image id, which must be expanded into a full URL. I used a dummy domain name here, since the images are available as a zip file and therefore cannot be addressed individually. Another exception is the “keeper” element, which holds the name of the owner organisation. This was changed from the Finnish name of the National Gallery to the name that corresponds to their institution template on Wikimedia Commons.
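The real demo does this mapping in XSLT; the following Python sketch illustrates the same logic – pass fields through unchanged, expand the image id, rename the keeper. The base URL, the image id and the organisation names are placeholders, not the demo's actual values:

```python
import xml.etree.ElementTree as ET

# Same mapping sketched in Python: pass fields through as-is, except
# expand the image id into a full URL and rename the keeper organisation.
# BASE_URL, the id and both organisation names are illustrative placeholders.
BASE_URL = "http://example.org/images/"          # dummy domain, as in the demo
KEEPER_MAP = {"Kansallisgalleria": "Finnish National Gallery"}

def convert(record):
    out = ET.Element("record")
    for field in record:
        value = field.text or ""
        if field.tag == "image":
            value = BASE_URL + value + ".jpg"     # expand id to a direct URL
        elif field.tag == "keeper":
            value = KEEPER_MAP.get(value, value)  # match the Commons template name
        ET.SubElement(out, field.tag).text = value
    return out

src = ET.fromstring(
    "<record><title>Blenda</title><image>A-1996-123</image>"
    "<keeper>Kansallisgalleria</keeper></record>"
)
print(ET.tostring(convert(src), encoding="unicode"))
```

In XSLT the pass-through part is a one-rule identity template, which is why the stylesheet stays so short.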

example record:
source metadata:
conversion demo:
direct link to the XSLT:

Photo: Signe Brander. source: Helsinki City Museum, CC BY-ND 4.0


In the second demo I used materials provided by the Helsinki City Museum. Their materials in Finna are licensed under CC BY-ND 4.0. Finna is an “information search service that brings together the collections of Finnish archives, libraries and museums”. Currently Finna has no API. It provides metadata in the LIDO format, but there is no direct URL to the LIDO file. However, the LIDO can be extracted from the HTML.

LIDO is a deeply nested format, so the conversion mostly consists of picking elements from the LIDO file and placing them in a flat XML file. For example, the name of the author sits quite deep in the LIDO structure.
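The flattening idea can be illustrated with a simplified, namespace-free imitation of LIDO's nesting – real LIDO uses namespaced elements and more wrapper levels, but the principle is the same: dig one value out of a deep tree and emit it as a single flat field.

```python
import xml.etree.ElementTree as ET

# Simplified, namespace-free imitation of LIDO's nesting around an author
# name. Real LIDO is namespaced and deeper; this only shows the flattening.
lido_like = ET.fromstring("""
<lido>
  <eventWrap>
    <event>
      <eventActor>
        <actorInRole>
          <actor>
            <nameActorSet>
              <appellationValue>Signe Brander</appellationValue>
            </nameActorSet>
          </actor>
        </actorInRole>
      </eventActor>
    </event>
  </eventWrap>
</lido>
""")

flat = ET.Element("record")
author = lido_like.find(".//appellationValue")   # dig the name out of the nesting
ET.SubElement(flat, "author").text = author.text
print(ET.tostring(flat, encoding="unicode"))
```

In the XSLT version this is one `xsl:value-of` with a long path expression per field.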

example LIDO record:
source metadata:
conversion demo:
(Please note that the demo requires that the browser's same-origin policy restrictions are loosened. The simplest way to do this is to start Google Chrome with the switch “--disable-web-security”. On Linux that would be: google-chrome --disable-web-security, and on a Mac (sorry, I cannot test this): open -a "Google Chrome" --args --disable-web-security. For Firefox see this:)
direct link to the XSLT:


These demos are just examples; no actual data has yet been uploaded to Wikimedia Commons. The aim is to show that the XML conversions needed for GWToolset are relatively simple, and that using GWToolset does not require an organisation to have an army of IT engineers.

The demos could certainly be better. For example, the author name should be changed to match the author's name on Wikimedia Commons. But again, that is just a few more lines of XSLT.

by Ari Häyrinen at October 01, 2014 07:22 AM

September 28, 2014

Viikon VALO

4x40 Reveal.js - Viikon VALO #196

Reveal.js is a JavaScript tool for creating impressive HTML5-based presentations.

Reveal.js is a JavaScript library that turns presentation material written as an HTML5 file into a polished presentation. The presentation can be viewed and browsed in a modern web browser. Reveal.js includes, among other things, attractive transitions between slides, nested sub-slides, and speaker notes. The system can be extended with plugins that enable, for example, displaying mathematics and syntax-highlighted source code. As an HTML5 application, a presentation can also include audio and video via the audio and video elements. The package ships with a few ready-made themes, and you can create your own according to your skills.

Using Reveal.js by editing the HTML file directly requires a little courage to start modifying the ready-made HTML template. Individual slides are written as HTML5 section elements, and the slide content is also HTML. If preferred, the simpler Markdown markup language can be used for writing the content instead.

In the basic setup, a zip package is downloaded from the project's GitHub page, and the index.html file it contains is edited as desired. The full setup additionally uses Node.js as a local server that the browser connects to. The advantage of the full setup over the basic one is a few extra features that are only available through a server connection. These include loading Markdown-formatted content from external files and the speaker notes view. The notes view is a separate window that can be shown on the speaker's own display while the main window is shown on the projector screen. With the server setup, the speaker notes are opened in a pop-up window by pressing the 's' key. The speaker notes view shows the slide currently visible in the main window together with its notes, as well as a preview of the next slide. The view also includes a clock and a timer showing the elapsed time.

Reveal.js presentations can also be viewed on mobile devices; on an iPad, for example, slides are changed by swiping the touch screen.

By default Reveal.js ships with plugins at least for Markdown content, rendering mathematics with MathJax, the speaker notes view, and displaying source code with syntax highlighting.

For those who do not want to build their presentations by writing HTML in a text editor, a visual editor is available as a web service at . The service offers plans with different prices and storage options. The free plan includes a small amount of disk space and only allows creating publicly visible presentations. Presentations made with the tool (HTML files) can, however, be downloaded from the site to your own computer and used with the normal reveal.js package.

Homepage (Download and instructions) (Demo presentation)
Works on the following platforms
The required packages are available from the Reveal.js GitHub page. The tool can be used either with the basic setup from an HTML file or with the full setup through a Node.js server.
Reveal.js's own documentation on GitHub.
A tutorial for beginners

Text: Pesasa
Screenshots: Pesasa

by pesasa at September 28, 2014 07:57 PM

4x39 Subtitle Editor - Viikon VALO #195

Subtitle Editor is a free tool for creating and translating subtitles for videos.

Subtitle Editor is a program for creating and editing subtitle files associated with video files. These include, for example, the widely used SubRip subtitles, i.e. subtitle files with the srt extension. The program can handle text-based subtitles stored in many different file formats. Subtitle files typically contain the texts to be displayed along with their timing, such as start times and durations. Many video players, such as the VLC media player, can display subtitles stored in a separate file alongside video files. Most text-based subtitle files can be edited directly as text, but editing is less laborious with a good, visual tool. Subtitle Editor includes tools for editing subtitles, a preview of the subtitled video, and a timeline-like view of the video's audio track.
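The SubRip format mentioned above is plain text: each cue is a sequence number, a start and end timestamp, and the text to display. A minimal parser sketch in Python (the sample cues are made up for illustration):

```python
import re

def parse_srt(text):
    """Parse SubRip (.srt) text into (start_ms, end_ms, text) tuples."""
    ts = r"(\d{2}):(\d{2}):(\d{2}),(\d{3})"
    cues = []
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        m = re.match(ts + r" --> " + ts, lines[1])   # line 0 is the sequence number
        to_ms = lambda h, mi, s, ms: ((int(h) * 60 + int(mi)) * 60 + int(s)) * 1000 + int(ms)
        start, end = to_ms(*m.groups()[:4]), to_ms(*m.groups()[4:])
        cues.append((start, end, "\n".join(lines[2:])))
    return cues

sample = """1
00:00:01,000 --> 00:00:03,500
Hello, world!

2
00:00:04,000 --> 00:00:06,000
Second cue
"""
print(parse_srt(sample))
```

This simplicity is why srt files can be edited in any text editor – a dedicated tool just makes the timing work far less tedious.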

The editing view can be either in timing mode, which shows the start and end times and duration of each text, or in translation mode, where an already-timed subtitle track can be translated into another language. In timing mode, the start time and duration of the selected subtitles can be adjusted with the versatile functions in the menus. Custom keyboard shortcuts for the most frequently used menu functions are easy to define by hovering the mouse pointer over them and pressing the desired key.

A video file, for example a film or a TV series episode to be subtitled, can be opened in the video view. The video view displays the subtitle being edited on top of the video according to its timing.

The waveform view shows the audio opened into it, either the video's audio track or an external file, as a waveform on a timeline. Each subtitle visible in the editing view is marked on the timeline as its own block. The blocks can be moved and stretched to the desired size by dragging them with the mouse. This way the texts do not have to be placed by ear alone; the variations visible in the audio track's waveform can be used as well. The waveform view can be zoomed and scrolled as needed.

When the video is played in the preview view, the waveform view follows the audio playback, showing at all times which point and which subtitle is currently active.

Subtitle Editor can use a spell checker installed on the system, for example Voikko, and point out spelling errors. The error-checking tool lists all the timing errors it finds in the subtitles, such as overlapping subtitles or subtitles shorter or longer than the configured duration limits. The automatic correction function can also fix most of these errors, apparently mainly by adjusting the durations of the subtitles.

Various styles, such as colours, can also be added to the subtitles, depending on the file format used. Note that not all file formats and video players support styling.

The program supports at least the following file formats (file extension in parentheses):
  • Adobe Encore DVD (NTSC) (txt)
  • Adobe Encore DVD (PAL) (txt)
  • Advanced Sub Station Alpha (ass)
  • BITC (Burnt-in timecode) (txt)
  • DCSubtitle (xml)
  • MicroDVD (sub)
  • MPL2 (txt)
  • MPsub (sub)
  • Plain Text Format (txt)
  • Sami (smi)
  • SBV (sbv)
  • Spruce STL (stl)
  • SubRip (srt)
  • Sub Station Alpha (ssa)
  • Subtitle Editor Project (xml)
  • SubViewer 2.0 (sub)
  • Timed Text Authoring Format 1.0 (xml)
Works on the following platforms
Linux, FreeBSD, OpenBSD, NetBSD
The program can be downloaded from its homepage. For Linux distributions it is most likely available from the distribution's own package repository.
Subtitling with Linux Tutorial

Text: Pesasa
Screenshots: Pesasa

by pesasa at September 28, 2014 04:05 PM

September 25, 2014

Wikimedia Suomi


Building an Open Finland

Open Finland 2014. Image: Kimmo Virtanen. CC-BY.

Open Finland 2014. Image: Kimmo Virtanen. CC-BY.

During 15–16 September, Finnish open knowledge and open data practitioners gathered in Helsinki at the Open Finland 2014 event. Wikimedia Finland participated with a joint exhibition stand together with the Finnish OpenGLAM network. We presented the various Wikimedia projects from different standpoints. The GLAM activities were also showcased with the Open Cultural Data course's recently published online contents. Wikimedia also participated at the Finnish eLearning Centre's exhibition stand.

The main purpose of the Open Finland event was to showcase different open data projects and to encourage civil servants to open up their contents. Open knowledge is clearly valued by the Finnish government, demonstrated by the fact that the event was organised by the Prime Minister’s Office. PM Alexander Stubb was also present and gave the opening speech at the event.

What can Wikimedia offer to public sector organisations? Wikimedia does open knowledge on a practical level. The Wikimedia projects Wikipedia and the media file repository Commons are already well-known international and multilingual platforms. With these platforms cultural heritage organisations and government offices can open up and link their own data. Wikimedia is non-profit and its pages are ad-free. This autumn Wikimedia Finland is organising Wikipedia training together with Finnish cultural heritage institutions.

Wikidata is a new way to open machine-readable structured data for free use. Wikidata is becoming a comprehensive linked database that includes data used by Wikipedia and other Wikimedia projects. For civil servants and researchers it would be useful to use Wikidata as a reference tool. It will be utilised for example in the British ContentMine project that uses machines to mine and liberate facts from scientific literature. This autumn Wikimedia Finland will organise a Wikidata workshop. If you are interested, please sign up here!

Historical maps are an excellent example of how governmental and cultural heritage institutions can partner with non-profit organisations. Wikimaps is an initiative by Wikimedia Finland to gather old maps in Wikimedia Commons, place them in world coordinates with the help of Wikimedia volunteers, and start using them in different ways. The project brings together and further develops tools for the discovery of old maps and information about places through history. At the Open Finland event Wikimedia was not the only participating organisation dealing with old maps. For example, Helsinki Region Infoshare and the National Land Survey of Finland have a wealth of historical maps and other geospatial open data, and some of it has already been published online free of charge.

Wikimedia Finland exhibition stand. Image: Kimmo Virtanen. CC-BY.

Wikimedia Finland exhibition stand. Image: Kimmo Virtanen. CC-BY.

At the event there was a clear desire that digitalisation and opening up government data would lead to new kinds of entrepreneurship and thus to economic growth. Indeed there were interesting product launches, such as Nearhood, which brings together news and other information related to a specific neighbourhood, or the environmental data project Envibase by the Ministry of the Environment.

Demonstrating the societal value of open data has been somewhat difficult. This is especially common in cultural heritage projects, where in many cases there are no tangible financial benefits. Beth Noveck, one of the event's keynote speakers, emphasised the need to search for evidence of the societal and financial value of open data: so far the arguments supporting open data have been too heavily based on faith, not evidence. Noveck showcased many projects in the UK and the United States. Perhaps these examples could offer good ideas to circulate in Finland too.

Personal data was one of the key topics during the event. The MyData panelists pondered the citizens' possibilities and limitations in using data about themselves. Open Knowledge Finland has also published a report on the topic. Personal data is an interesting topic that raises differing opinions. On the one hand, public opinion is clearly in favour of individuals' right to control data about themselves. On the other hand, the Wikimedia Foundation, for example, has sharply criticised the recent “right to be forgotten” European Union legislation, because it can lead to censorship that distorts online source material.

Wikimedia Finland would like to thank Samsung for lending us IT equipment for exhibition use.

by Sampo Viiri at September 25, 2014 01:00 PM

September 16, 2014

Henri Bergius

Flowhub Kickstarter delivery

It is now a year since our NoFlo Development Environment Kickstarter got funded. Since then our team, together with several open source contributors, has been busy building the best possible user interface for Flow-Based Programming.

When we set out on this crazy adventure, we still mostly had only NoFlo and JavaScript in mind. But there is nothing inherently language-specific in FBP or our UI, and so when people started making other runtimes compatible with the protocol we embraced the idea of full-stack flow-based programming.

Here is how the runtime registration screen looks with the latest release:

Flowhub Runtime Registration

This hopefully highlights a bit of what can be done with Flowhub right now. I know there are several other runtimes that are not yet listed there. We should have something interesting to announce in that space soon!

Live mode

The Flowhub release made today includes several interesting features apart from giving private repository access to our Kickstarter backers. One I'm especially happy about is what we call live mode.

The live mode, initially built by Lionel Landwerlin, enables Flowhub to discover and connect to pieces of flow-based software running in different environments. With it you can monitor, debug, and modify applications without having to restart them!

We made a short demo video of this in action with Flowhub, Raspberry Pi and an NFC tag.


Getting started

Our backers should receive an email today with instructions on how to activate their Flowhub plans. For those who missed the Kickstarter, there should be another batch of Flowhub pre-orders available soon.

Just like with Travis and GitHub, Flowhub is free for open source development. So, everybody should be able to start using it immediately even without a plan.

If you have any questions about Flow-Based Programming or how to use Flowhub, please check out the various ways to get in touch on the NoFlo support page.

Kickstarter Backer badge

by Henri Bergius at September 16, 2014 12:00 AM

September 14, 2014

Viikon VALO

4x38 Unsplash - Viikon VALO #194

Unsplash is a collection of freely usable high-resolution, artistic photographs.

Unsplash offers, for completely free use, a collection of beautiful, high-resolution photographs taken by selected professionals. All Unsplash images are licensed under Creative Commons' fully permissive CC0 license, which gives a completely free hand in using the works. The images can be browsed directly on the site's front page or as smaller thumbnails in the archive. All the images are very beautiful and atmospheric, and of high enough resolution for many practical uses. They are clearly the work of skilled, professional photographers.

Unsplash showcases works made using images from the service in its Made with Unsplash section.

Behind the service is Crew, whose business is to act as a link between parties needing graphic design work and skilled professionals.

Works on the following platforms

Text: Pesasa
Screenshots: Pesasa
Images: Unsplash

by pesasa at September 14, 2014 02:57 PM

4x37 Graphviz - Viikon VALO #193

Graphviz is an automated tool for drawing graphs.

Graphviz draws carefully laid-out graphs from the textual description given to it. The graph to be drawn is written in a text file that describes the nodes of the graph and the edges between them. Nodes are given their label text and appearance, i.e. colour, shape, type and other properties, and the file states which nodes are connected to each other by edges. Edges can likewise be given various properties, such as colour, line style and a text label. From this input Graphviz builds the graph and lays it out on the plane in the way it deems best. Graphviz tries to minimise edge crossings and otherwise keep the produced image as clear as possible. A few differently optimised algorithms are available, which place the nodes of the produced images in slightly different ways.

The program can produce images in several file formats, such as PS, PDF, SVG, FIG, PNG and GIF. Images in these formats can of course be converted further into other formats, and SVG vector images can naturally also be edited further with, for example, Inkscape. The program is particularly useful for visualising scientific results, when the graph material to be presented is, for example, data produced automatically by some program.

The DOT file format used as input is fairly clear and understandable, although nodes and edges can be given so many kinds of properties that it is best to learn them through suitable documentation and examples. A DOT file first states whether the graph is directed or undirected (digraph or graph); after that, the nodes and edges, their properties, and properties concerning the whole graph are listed inside curly braces. For example:

    digraph G {
        rankdir=LR;
        A -> B [color=red];
        A -> C [style=dotted];
        B -> D;
        C -> D [style=dashed, dir=both];
        C [style=filled];
        D [shape=box];
    }
The example above defines a directed graph consisting of four nodes, A, B, C and D, between which edges are defined with the "->" operator. Properties of edges and nodes that differ from the defaults are given between square brackets. A node does not need to be listed separately if it appears as an endpoint of some edge and no non-default properties are wanted for it. Graphs grow from top to bottom by default, but in this example the growth direction has been set to left-to-right, i.e. LR.

The graph defined in text form is compiled into an image of the desired type with one of the available command-line programs: dot, neato, twopi, circo, fdp, sfdp and patchwork. These programs apply different algorithms to the placement of nodes and edges and therefore produce different-looking images. The programs are optimised for the following uses:
  • dot - Directed graphs, especially tree-like acyclic graphs
  • neato - Undirected graphs
  • twopi - Graphs with a radial layout, where one node is the centre and the others are placed on rings around it according to distance
  • circo - Circular layout
  • fdp - Undirected graphs
  • sfdp - Undirected graphs
  • patchwork - Presenting graphs made up of clusters as treemaps

Results and algorithms from graph theory are applied to computing the graph layout.

The DOT file shown above – saved as, say, graph.dot – is compiled into an image file, for example, like this:

   dot -Tpng graph.dot -o graphviz.png
The result looks like this: graphviz.png

More complex graphs can also be made up of subgraphs. Subgraphs can be used to group the nodes of a graph, either to set their properties identically in one go or to lay them out as a single cluster.

Creating a DOT file that Graphviz understands programmatically is fairly simple. In addition, most programming languages have bindings for using Graphviz directly as a library. Many programs therefore use Graphviz to draw graphs instead of trying to compute a suitable node placement themselves. One such program is debtree, which prints a DOT file of the dependencies of a requested deb package as used in the Debian and Ubuntu distributions.
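Emitting DOT programmatically really is just string assembly; a minimal Python sketch (the helper function and its edge data are made up for illustration):

```python
# Minimal sketch of generating DOT source programmatically; the helper
# and the edge data are illustrative, not from any particular program.
def to_dot(edges, graph_attrs=None):
    """Render a directed graph as DOT source from (tail, head) pairs."""
    lines = ["digraph G {"]
    for key, value in (graph_attrs or {}).items():
        lines.append(f"    {key}={value};")      # graph-level attributes
    for tail, head in edges:
        lines.append(f"    {tail} -> {head};")   # one edge statement per pair
    lines.append("}")
    return "\n".join(lines)

dot_source = to_dot([("A", "B"), ("A", "C"), ("C", "D")], {"rankdir": "LR"})
print(dot_source)
```

The resulting text can be written to a file and fed to dot, neato or any of the other layout programs.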

Eclipse Public License (EPL)
Works on the following platforms
Linux, Solaris, Windows, Mac OS X, FreeBSD, OpenBSD, NetBSD
For Linux distributions Graphviz is available directly from the package manager. For other platforms it can be downloaded from the program's homepage.
Program documentation
Properties of graphs, nodes and edges
GraphViz for discrete math students

Text: Pesasa
Screenshots: Pesasa

by pesasa at September 14, 2014 02:05 PM

September 07, 2014

Viikon VALO

4x36 Internet Archive Book Images - Viikon VALO #192

Internet Archive Book Images is a large collection of public domain images scanned from old books.
valo192-internet_archive_book_images.png The Internet Archive is an American non-profit organization that aims to archive, like a library, the content of the Internet for researchers and future generations. One of its best-known services is the WayBack Machine, which lets you search and browse old archived versions of websites. Besides websites, the organization also archives printed material in digital form by scanning books that are free of copyright. The books span more than five centuries. Internet Archive Book Images is a collection, compiled by the organization on the Flickr service, of images found in these digitized books. From over two million scanned books, about 14 million images have been extracted, of which about 2.6 million have already been released for public distribution.

Because the scanned books also went through optical character recognition during digitization, each image carries as metadata the 500 words that appear before and after it in the book. This gives every image a context by which it can be searched. In Flickr's search you can restrict a word search to the images shared by this account, which makes it easy to look for, say, pictures related to cats. Owing to the breadth of the context text, searches may also return some extraneous hits; a cat search, for example, also turned up pictures of a number of other animals. For the same reason the captions often do not state directly what an image contains; you have to work that out from the context yourself. Each image's Flickr page also includes information about the book in question and a link to its electronic version in the Internet Archive.

Because the images are scanned from books that are in the public domain, either due to their age or their source, they are free to use. On Flickr their terms of use are marked "No known copyright restrictions".

Homepage (Account front page) (Photostream)
Public Domain
Works on the following platforms
More information
Internet Archive's blog post
BBC news story on the subject; the Internet Archive's introduction on Flickr
Other similar Viikon VALO picks
Yle's archive images
Mechanical Curator collection
Flickr: Creative Commons
Wikimedia Commons

Text: Pesasa
Screenshots: Pesasa
Images: Internet Archive

by pesasa at September 07, 2014 09:44 AM

August 22, 2014

Wikimedia Suomi

Wikipedia trainers were trained in London


At the training for Wikipedia trainers we got new provisions for mind and body alike

Greetings from London, where I spent a week at various Wikipedia-related events. So much happened in a week that condensing it all into one blog post is pretty much impossible, which is why there will be several posts, from me as well as from the other London attendees :). Urjanhai has already started collecting Wikimania-related links on the village pump.

My week began with the training for Wikipedia trainers, which of all the events was perhaps the one I looked forward to most, since meeting other Wikipedia trainers and discussing Wikipedia training is a rare treat for me. My expectations were somewhat off: I imagined we would focus more on how Wikipedia specifically is taught and what challenges that involves. Instead, we talked a lot about different learning styles and how to take them into account when designing training. We also tested our own learning styles and learned things about ourselves we had not realized before. Thoroughly useful stuff! Teaching people how to train matters at these Wikimedia UK events also because they usually draw many Wikipedians with no prior training experience.

We then got to put these new training lessons into practice by designing Wikipedia-related training sessions in small groups. The groups were cleverly divided so that each included representatives of different learning styles.


Our trainer Candy believed in the power of chocolate

So on the second day we got to enjoy four fun half-hour Wikipedia-themed training sessions:

  1. Why create a user account on Wikipedia, and what should go on users' own pages
  2. How to write sheet music for children's songs in Wikipedia
  3. How to choose images and other enlivening elements for different kinds of Wikipedia articles
  4. How companies can update Wikipedia

The last topic was my suggestion; I made the presentation with Samuel from Hong Kong and Ginevra from Italy, and we chose corporate communications people as our target audience. We first explained what motivates the Wikipedia community to edit and why companies should care too. Then we had the audience come up with criteria for notable and non-notable companies. We explained that it is important for companies not only to understand the rules of Wikipedia editing but also to be very transparent and open in what they do. Finally the participants got to hunt for things to fix in an Apple-themed Wikipedia article we had tweaked a little.

Of the other presentations my favourite was the one on writing sheet music, because it was delivered in such an engaging way that I would now like to give a similar training to some group myself. The group presentations also gave me the Wikipedia training tips I had been hoping for in general.

The other thing I expected from the course was an international network of Wikipedia trainers. We are now assembling one in a secret Facebook group, and the start looks promising: in a small, familiar group it is easy to exchange Wikimedia news and tips.

And now I have some souvenirs from London for you too: here is an interview with my course mate Ziko on why those user pages are important:

<iframe class="youtube-player" frameborder="0" height="382" src=";rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" type="text/html" width="625"></iframe>

and to go with the interview, instructions on how to create a Wikipedia user account:

<object data=";doc=wikipediatililuontiohje-140821032152-phpapp02" height="512" type="application/x-shockwave-flash" width="625" wmode="opaque"><param name="movie" value=";doc=wikipediatililuontiohje-140821032152-phpapp02"/><param name="allowFullScreen" value="true"/></object>

Both pieces are published under an open Creative Commons license, so I'll be waiting with interest to see whether anyone is inspired to remix them. :)

by Johanna Janhonen at August 22, 2014 05:23 AM

August 21, 2014

Niklas Laxström

Midsummer cleanup: YAML and file formats, HHVM, translation memory

Wikimania 2014 is now over and that is a good excuse to write updates about the MediaWiki Translate extension and translatewiki.net.
I’ll start with an update related to our YAML format support, which has always been a bit shaky. Translate supports different libraries (we call them drivers) to parse and generate YAML files. Over time the Translate extension has supported four different drivers:

  • spyc uses spyc, a pure PHP library bundled with the Translate extension,
  • syck uses libsyck which is a C library (hard to find any details) which we call by shelling out to Perl,
  • syck-pecl uses libsyck via a PHP extension,
  • phpyaml uses the libyaml C library via a PHP extension.

The latest change is that I dropped syck-pecl because it does not seem to compile with PHP 5.5 anymore, and I added phpyaml. We tried to use spyc for a while, but the output it produced for localisation files was not compatible with Ruby projects: after complaints, I had to find an alternative solution.

Joel Sahleen let me know of phpyaml, which I had somehow not found before: thanks to him we now use the same libyaml library that Ruby projects use, so we should be fully compatible. It is also the fastest driver of the four. Anyone generating YAML files with Translate is highly recommended to use the phpyaml driver. I have not checked how phpyaml works with HHVM, but I was told that HHVM ships with a built-in yaml extension.

Speaking of HHVM, the long standing bug which causes HHVM to stop processing requests is still unsolved, but I was able to contribute some information upstream. In further testing we also discovered that emails sent via the MediaWiki JobQueue were not delivered, so there is some issue in command line mode. I have not yet had time to investigate this, so HHVM is currently disabled for web requests and command line.

I have a couple of refactoring projects for Translate going on. The first is about simplifying the StringMangler interface. This has no user visible changes, but the end goal is to make the code more testable and reduce coupling. For example the file format handler classes only need to know their own keys, not how those are converted to MediaWiki titles. The other refactoring I have just started is to split the current MessageCollection. Currently it manages a set of messages, handles message data loading and filters the collection. This might also bring performance improvements: we can be more intelligent and only load data we need.

Théo Mancheron competes in the men's decathlon pole vault final

Aiming high: creating a translation memory that works for Wikipedia; even though a long way from here (photo Marie-Lan Nguyen, CC BY 3.0)

Finally, at Wikimania I had a chance to talk about the future of our translation memory with Nik Everett and David Chan. In the short term, Nik is working on implementing in ElasticSearch an algorithm to sort all search results by edit distance. This should bring translation memory performance on par with the old Solr implementation. After that is done, we can finally retire Solr at Wikimedia Foundation, which is much wanted especially as there are signs that Solr is having problems.

Together with David, I laid out some plans on how to go beyond simply comparing entire paragraphs by edit distance. One of his suggestions is to try doing edit distance over words instead of characters. When dealing with the 300 or so languages of Wikimedia, what is a word is less obvious than what is a character (even that is quite complicated), but I am planning to do some research in this area keeping the needs of the content translation extension in mind.
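As a rough illustration of the difference (this is a sketch of the general idea, not the actual ElasticSearch or Translate implementation), the same dynamic-programming edit distance can operate on either characters or words, simply by changing what counts as a symbol:

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences: strings compare
    character by character, lists of words compare word by word."""
    prev = list(range(len(b) + 1))  # distances from the empty prefix of a
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # delete x
                            curr[j - 1] + 1,          # insert y
                            prev[j - 1] + (x != y)))  # substitute x -> y
        prev = curr
    return prev[-1]

s1, s2 = "the quick brown fox", "the slow brown fox"
print(edit_distance(s1, s2))                  # character level: 5
print(edit_distance(s1.split(), s2.split()))  # word level: 1
```

At the word level a single substitution separates the two sentences, which often matches a translator's intuition of "one changed word" better than the character count does.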

by Niklas Laxström at August 21, 2014 04:24 PM

August 20, 2014

Wikimedia Suomi

Come bring cultural knowledge to Wikipedia

Cultural knowledge was brought to Wikipedia at Kiasma's wiki marathon in 2013. Photo: Kansallisgalleria / Petri Virtanen CC BY-SA 3.0


Interested in culture? Want to learn how to write for Wikipedia? This is for you.

Wikimedia Suomi, the Helsinki Region Summer University and six cultural institutions are organizing a course in October-November where participants learn to write for Wikipedia and get to know cultural institutions beyond the surface. Registration is open now.

The Tuo kulttuuri Wikipediaan (Bring Culture to Wikipedia) course visits six GLAM organizations - libraries, archives and museums - in Helsinki. In all of them we get to peek behind the scenes and become acquainted with the host organization's specialty, experts and collections. Every meeting includes Wikipedia writing and a teaching session on Wikipedia. Experienced Wikipedians serve as guides.

The venues are Helsingin kaupunginkirjasto (Helsinki City Library), Helsingin taidemuseo (Helsinki Art Museum), the Kansallisgalleria collections together with the Ateneum art museum, Suomen valokuvataiteen museo (the Finnish Museum of Photography), Svenska litteratursällskapet i Finland, and the Yle archive.

The course is part of Wikimedia Suomi's GLAM activities, which build cooperation between memory and cultural organizations and Wikipedians. The goal is to encourage organizations onward along the path of open knowledge and, of course, to add high-quality cultural information to Wikipedia. At the same time it raises awareness of Wikipedia and its sister projects and brings them new editors.

The course design draws on experience from earlier GLAM projects that Wikimedia Suomi has helped carry out. Last year the Museum of Contemporary Art Kiasma hosted a record-breaking wiki marathon lasting over 24 hours. Editing events have also been held at the Ateneum and at Mediamuseo Rupriikki. The Wikipedia course is a new form of cooperation whose viability we get to test this autumn.

We recommend attending the whole course, since Wikipedia skills build up over its span and the meetings offer new perspectives on Wikipedia and cultural content alike. The course is free of charge and places are limited, so it pays to register early. Welcome aboard!

Sanna Hirvonen

The author is Wikimedia Suomi's GLAM coordinator and works as a museum educator at Kiasma.

Course programme and registration

Wikipedia will also be edited together in Tampere on 6 September. Come join the wiki workshop at Mediamuseo Rupriikki.


by Sanna Hirvonen at August 20, 2014 10:22 AM

August 13, 2014

Riku Voipio

Booting Linaro ARMv8 OE images with Qemu

A quick update - Linaro ARMv8 OpenEmbedded images work just fine with qemu 2.1 as well:

$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt \
-kernel Image -append 'root=/dev/vda2 rw rootwait mem=1024M console=ttyAMA0,38400n8' \
-drive if=none,id=image,file=vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[ 0.000000] Linux version 3.16.0-1-linaro-vexpress64 (buildslave@x86-64-07) (gcc version 4.8.3 20140401 (prerelease) (crosstool-NG linaro-1.13.1-4.8-2014.04 - Linaro GCC 4.8-2014.04) ) #1ubuntu1~ci+140726114341 SMP PREEMPT Sat Jul 26 11:44:27 UTC 20
[ 0.000000] CPU: AArch64 Processor [411fd070] revision 0
Quick benchmarking with the age-old ByteMark nbench:
Index Qemu Foundation Host
Memory 4.294 0.712 44.534
Integer 6.270 0.686 41.983
Float 1.463 1.065 59.528
Baseline (LINUX) : AMD K6/233*
Qemu is up to 8x faster than the Foundation model on integers, but only 50% faster on floating point. Meanwhile, the host PC is 7-40x faster executing native instructions than emulating ARMv8.
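The ratios come straight from the nbench table; a quick sketch to reproduce them (Host/Qemu works out to roughly 10.4x for memory, 6.7x for integer and 40.7x for float, i.e. the 7-40x range quoted):

```python
# nbench index scores from the table above (higher is better).
scores = {
    "Memory":  {"qemu": 4.294, "foundation": 0.712, "host": 44.534},
    "Integer": {"qemu": 6.270, "foundation": 0.686, "host": 41.983},
    "Float":   {"qemu": 1.463, "foundation": 1.065, "host": 59.528},
}

for index, s in scores.items():
    print(f"{index}: Qemu/Foundation = {s['qemu'] / s['foundation']:.1f}x, "
          f"Host/Qemu = {s['host'] / s['qemu']:.1f}x")
```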

by Riku Voipio ( at August 13, 2014 02:36 PM

August 05, 2014

Riku Voipio

Testing qemu 2.1 arm64 support

Qemu 2.1 was released just a few days ago and is now available in Debian/unstable. Trying out a (virtual) arm64 machine is now just a few steps away for unstable users:

$ sudo apt-get install qemu-system-arm
$ wget
$ wget
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt -kernel trusty-server-cloudimg-arm64-vmlinuz-generic \
-append 'root=/dev/vda1 rw rootwait mem=1024M console=ttyAMA0,38400n8 init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring' \
-drive if=none,id=image,file=trusty-server-cloudimg-arm64-disk1.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[ 0.000000] Linux version 3.13.0-32-generic (buildd@beebe) (gcc version 4.8.2 (Ubuntu/Linaro 4.8.2-19ubuntu1) ) #57-Ubuntu SMP Tue Jul 15 03:52:14 UTC 2014 (Ubuntu 3.13.0-32.57-generic
[ 0.000000] CPU: AArch64 Processor [411fd070] revision 0
ubuntu@ubuntu:~$ cat /proc/cpuinfo
Processor : AArch64 Processor rev 0 (aarch64)
processor : 0
Features : fp asimd evtstrm
CPU implementer : 0x41
CPU architecture: AArch64
CPU variant : 0x1
CPU part : 0xd07
CPU revision : 0

Hardware : linux,dummy-virt
The "init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring" part is Ubuntu cloud-init stuff that sets the ubuntu user's password to "randomstring" - don't use "randomstring" literally there if you are connected to the internets...

For more detailed writeup of using qemu-system-aarch64, check the excellent writeup from Alex Bennee.

by Riku Voipio ( at August 05, 2014 07:45 PM

July 07, 2014

Niklas Laxström

translatewiki.net summer update

It’s been a busy while since the last update, but how could I not have worked on translatewiki.net? ;) Here is an update on my current activities.
In this episode:

  • we provide translations for over 70 % of users of the new Wikipedia app,
  • I read a book on networking performance and get needy for speed,
  • ElasticSearch tries to eat all of us and our memory,
  • HHVM finds the place not fancy enough,
  • Finns and Swedes start cooperating.


Naturally, I have been thinking of ways to further improve performance. I have been running HHVM as a beta feature for many months now, but I have kept turning it on and off due to stability issues. It is currently disabled, but my plan is to try the Wikimedia-packaged version of HHVM. Those packages only work on Ubuntu 14.04, so Siebrand and I first have to upgrade the server from Ubuntu 12.04, as we plan to do later this month (July). (Update: done as of 2014-07-09, 14 UTC.)

Map of some translators

A global network of translators is not served well enough from a single location

After reading a book about networking performance I finally decided to give a content delivery network (CDN) a try. Not because they can optimize and cache things on the fly [1], nor because they can do spam protection [2], but because a CDN can reduce latency, which is usually the main bottleneck of web browsing. We only have a single server, in Germany, but our users are international. I am close to the server, so my experience is much better than that of many of our users. I do not have any numbers yet, but I will run some experiments and gather numbers to see whether a CDN helps us.

[1] MediaWiki is already very aggressive in terms of optimizations for resource delivery.
[2] Restricting account creation already eliminated spam on our wiki.

Wikimedia Mobile Apps

Amir and I have been working closely with the Wikimedia Mobile Apps team to ensure that their apps are well supported. In just a couple of weeks the new app was translated into dozens of languages and released, with over 7 million new installations by non-English users (74 % of the total).

In more detail, we finally addressed a longstanding issue in the Android app which prevented translation of strings containing links. I gave Yuvi access to synchronize translations, ensuring that translators have as much time as possible to translate and that the apps have the latest updates before being released. We also discussed how to notify translators before releases to get more translations in on time, and improvements to their i18n frameworks to bring their flexibility more in line with MediaWiki (including plural support).

To put it bluntly, for some reason the mobile i18n frameworks are ugly and hard to work with. Just as an example, Android did not support many languages at all simply because their language codes were one character too long; support is still only partial. I can’t avoid comparing this to the extra effort that has been needed to support old versions of Internet Explorer: we would rather be doing other cool things, but the environment is not going to change anytime soon.


I installed and enabled CirrusSearch on translatewiki.net: for the first time, we have a real search engine for all our pages! I had multiple issues, including running a bit tight on memory while indexing all the content.

Translate’s translation memory support for ElasticSearch has been almost ready for a while now. It may take a couple of months before we’re ready to migrate from Solr (first on translatewiki.net, then on Wikimedia sites). I am looking forward to it: as a system administrator, I do not want to run both Solr and ElasticSearch.

I want to say big thanks to Nik for helping both with the translation memory ElasticSearch backend and my CirrusSearch problems.

Wikimedia Sweden launches a new project

I am expecting increased activity and new features at translatewiki.net thanks to a new project by Wikimedia Sweden together with InternetFonden.Se. The project has been announced on the Wikimedia blog, but in short they want to bring in more Swedish translators, new projects for translation and possibly open badges to increase translator engagement. They are already looking for feedback, so please do share your thoughts.

by Niklas Laxström at July 07, 2014 09:44 AM

May 08, 2014

Riku Voipio

Arm builder updates

Debian has recently received a donation of 8 build machines from Marvell. The new machines come with quad-core MV78460 Armada XP CPUs, a DDR3 DIMM slot so we can plug in more memory, and speedy SATA ports. They replace the well-served Marvell MV78200-based builders - the ones that have been building Debian armel since 2009. We are planning a more detailed announcement, but I'll provide a quick summary:

The speed increase provided by the MV78460 can be seen by comparing build times on selected builds since early April:

Qemu build times.

We can now build Qemu in 2h instead of 16h - 8x faster than before! Certainly a substantial improvement, and impressive kit from Marvell! But not all packages gain this amount of speedup:

webkitgtk build times.

This example, webkitgtk, builds barely 3x faster. The explanation is found in webkitgtk's debian/rules:

# Parallel builds are unstable, see #714072 and #722520
# ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# endif
The old builders are single-core[1], so regardless of parallel building you could easily max out the CPU. The new builders will use only 1 of 4 cores if debian/rules lacks parallel build support.
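For comparison, a debian/rules that does honour parallel builds conventionally enables exactly the commented-out logic shown above and passes the job count on to make; a typical sketch (the variable names follow the common Debian pattern) looks like this:

```make
# Parse parallel=N out of DEB_BUILD_OPTIONS and pass it on to make
ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
MAKEFLAGS += -j$(NUMJOBS)
endif
```

With that in place, building with DEB_BUILD_OPTIONS=parallel=4 lets make use all four cores of the new builders.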

In this buildd CPU usage graph, we see that most of the time only one CPU is in use. So, for fast package build times, make sure your package supports parallel building.

For developers, there is a porter machine with an Armada XP. It has schroots for both armel and armhf. Set "DEB_BUILD_OPTIONS=parallel=4" and off you go.

Finally I'd like to thank Thomas Petazzoni, Maen Suleiman, Hector Oron, Steve McIntyre, Adam Conrad and Jon Ward for making the upgrade happen.

Meanwhile, we have had unrelated trouble - a bunch of disks have broken within a few days of each other. I take it the warranty just ran out...

[1] only from Linux's point of view - the MV78200 actually has 2 cores, just not SMP or coherent. You could run an RTOS on one core while you run Linux on the other.

by Riku Voipio ( at May 08, 2014 07:14 PM

May 07, 2014

Henri Bergius

Flowhub public beta: a better interface for Flow-Based Programming

Today I'm happy to announce the public beta of the Flowhub interface for Flow-Based Programming. This is the latest step in the adventure that started with some UI sketching early last year and went through our successful Kickstarter. Now, thanks to our 1 205 backers, it is available to the public.

Getting Started

This post will go into more detail on how the new Flowhub interface works in a bit, but for those who want to dive straight in, here are the relevant links:

Make sure to read the Getting Started guides and check out the Flowhub FAQ. There is also a new video available:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="" width="640"></iframe>

Both the web version and the Chrome app are built following the offline first philosophy, and keep everything you need stored locally inside your browser. The Chrome app and the upcoming iOS and Android builds will enable us to later introduce capabilities that are not possible inside regular browsers, like talking directly to MicroFlo runtimes over USB or Bluetooth. But other than that they're similar in features and user experience.

New User Interface

If you read the NoFlo Update from last October, you might notice that the new Flowhub user interface looks and feels quite different from it.

Main screen of new Flowhub UI

Graph editing in new Flowhub UI

This new design was implemented to improve touch-screen friendliness, as well as to give Flowhub a more focused, unique look. It also allowed us to follow some interesting UX paths that I'll explain next.


One typical problem in visual programming tools is that they can become quite cluttered with information. To solve this, we utilized the concept of Zooming User Interfaces, which allow us to show a clear overview of a program when zoomed out, and reveal all kinds of detail about it when zoomed in.

Zoomed out

Zoomed in

Zooming works with two-finger scroll on typical desktop computers, or with the pinch gesture on touch-enabled devices.

Pie Menu

Another interface concept that we used to make interactions faster and more contextual is Pie Menus.

For example, you can easily navigate to subgraphs and component source code with the menu:

Navigating with the Pie Menu

When you have selected multiple nodes, you can use the menu to group them or move them to a new subgraph:

Group selections with the Pie Menu

The menu can also be used for removing edges or nodes:

Deleting an edge with the Pie Menu

You can activate the pie menu in the graph editor with a right mouse click, or with a long press on touch-enabled devices.

Component Editor

Another new major feature is in-app component editing. If your runtime supports it, you can at any time create or modify custom components for your project and they'll become immediately available for your graphs.

Creating a new component

Component Editing

The programming languages available for component creation depend on the runtime. With NoFlo these are JavaScript and CoffeeScript. With another runtime they might be C, Java, or Python.

Offline First

While some claim that in reality you're never offline, the reality is that there are many situations where Internet connectivity is either not available, unreliable, or simply expensive. Think of a typical conference or a hackathon for instance.

Because of this — and to give software developers the privacy they deserve — Flowhub has been designed to work "offline first". All your graphs, projects, and custom components are stored locally in your browser's Indexed Database and only transmitted over the network when you wish to push to a GitHub project, or interact with a remote runtime.

We're following a very similar UI concept as Amazon Kindle in that you can download projects locally to your device, or browse the ones you have available in the cloud:

Local and remote projects

At any point you can push your changes to a graph or a component to GitHub:

Pushing to GitHub

Runtime discovery happens through a central service, but once you know the address of your FBP runtime, the communications between it and your browser will happen directly. This makes it easy to work with Node.js projects running on your own machine even when offline.

Cross-platform, Full-stack

When we launched the NoFlo UI Kickstarter, we were initially only thinking about how to support NoFlo in different environments. But in the course of development we ended up defining a network protocol for FBP that enabled us to move past just a single FBP environment and towards supporting all of them. This is what prompted the Flowhub rebranding.

Since then, the number of supported FBP environments has been growing. Here is a list of the ones I'm aware of:

I hope that the developers of other FBP environments like JavaFBP and GoFlow add support for the FBP protocol soon so that they can also utilize the Flowhub interface.

Open Source vs. Paid

As promised in our Kickstarter, the NoFlo Development Environment is an open source project available under the MIT license.

Flowhub is a branded and supported instance of that with some additional network services like the Runtime Registry.

NoFlo UI vs. Flowhub

The Flowhub plans allow us to continue development of this Flow-Based Programming toolset, as well as to provide the various network services needed for making the experience smooth.

Just like with GitHub, Flowhub provides a free environment for anybody working on public and open source projects. Private projects need a paid plan.

Kickstarter & Pre-Ordered Plans

It is likely that many readers of this post already supported our Kickstarter or pre-ordered a Flowhub plan. Since Flowhub is still in beta, we haven't activated your plans yet. So for now, everybody is using Flowhub with the free plan.

We will be rolling out the paid plans and Kickstarter rewards towards the end of the beta testing period.

Feel free to already log in and start using Flowhub, however! The plan will be added to your account when we feel the software is ready for it.


Here are some examples of things you can build with Flowhub targeting web browsers:

For a more comprehensive cross-platform project, see my Building an Ingress Table with Flowhub post.

There is also an ongoing Google Summer of Code project to port various Meemoo apps to Flowhub. This will hopefully result in a lot more demos.

Next Steps

The main purpose of this public beta is to give our backers and other FBP enthusiasts early access to the Flowhub user interface. Now we will focus on stabilization and bug fixing, aided by the NoFlo UI issue tracker. We're also gathering feedback from beta testers in the form of user surveys and will use it to prioritize both bug fixing and feature work.

Flowhub team testing the UI

Right now the main areas of focus are:

We hope to release the stable version of Flowhub in summer 2014.

by Henri Bergius ( at May 07, 2014 12:00 AM

May 02, 2014

Henri Bergius

Flowhub and the GNOME Developer Experience

I've spent the last three days in the GNOME Developer Experience hackfest working on the NoFlo runtime for GNOME with Lionel Landwerlin.

GNOME Developer Experience hackfest participants

What the resulting project does is give the ability to build and debug GNOME applications in a visual way with the Flowhub user interface. You can interact with large parts of the GNOME API using either automatically generated components, or hand-built ones. And while your software is running, you can see all the data passing through the connections in the Flowhub UI.

GNOME development in Flowhub

The way this works is the following:

  • You install and run the noflo-gnome runtime
  • The runtime loads all installed NoFlo components and dynamically registers additional ones based on GObject Introspection
  • The runtime pings Flowhub's runtime registry to notify the user that it is available
  • Based on the registry, the runtime becomes available in the UI
  • After this, the UI can start communicating with the runtime. This includes loading and registering components, and creating and running NoFlo graphs
  • The graphs are run inside Gjs

Creating a new NoFlo GNOME project

While there is still quite a bit of work to be done in exposing more of the GNOME API as flow-based components, you can already do quite a bit with this. In addition to building simple UIs with GTK+, working with Clutter animations was especially fun. With NoFlo, every running graph is "live", and so you can easily modify the various parameters and add new functionality while the software is running, and see those changes take effect immediately.

Here is a quick video of building a simple GTK application that loads a Glade user interface definition, runs it as a new desktop window, and does some signal handling:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="" width="480"></iframe>

If you're interested in visual, flow-based programming on the Linux desktop, feel free to try out the noflo-gnome project!

There are still bugs to squish, documentation to write, and more APIs to wrap as components. All help in those is more than welcome.

by Henri Bergius ( at May 02, 2014 12:00 AM

April 09, 2014


Enable Google services on Ubuntu!

I often hear grumbling that Ubuntu lacks features people are used to in Windows. That is quite true and must be admitted, but things have lately been moving in a better direction. In this post I give a few tips to get even more out of your Ubuntu environment with the Chrome browser.
Practically all of Google's services are tied to their Chrome and Chromium browsers. I noticed this myself about a year ago when I upgraded my company's "office machine" to Ubuntu. I had just started using the Drive cloud service and was frustrated that Google did not offer a native desktop application for Ubuntu. On Windows, I had been frustrated by the application's slowness.

After some time spent googling I stumbled upon the sites Omg! Ubuntu! and Omg! Chrome! Both are run by the same maintainer, and they offer new articles on their topics daily. I dug deeper into the articles – and found a way to make Google's applications more readily available on the desktop.

Since I am a Drive user, I wanted easy access to the applications it offers. This was simple to do; here are the instructions:

1) Sign in to Chromium or Chrome
2) On the Apps tab, choose the applications you want by right-clicking their icons and selecting "Create shortcut"
3) Save the shortcut in a folder of your choice
4) Navigate back to that folder, select the icons and drag them onto the Ubuntu desktop launcher

Tadaa! You now have Google's office applications, cloud storage, and anything else you care to install from the Chrome Web Store!


On top of this, your files are saved directly to the cloud, so you can access them from anywhere and need not worry about backups.

I believe this will be useful for users who have so far relied on Ubuntu One (which will be discontinued shortly) and for those who are fed up with keeping their files in sync. In my own case my work involves a lot of moving around and I need unrestricted access to my files, and this approach has made my daily use considerably easier.

by presidentti at April 09, 2014 05:45 PM

March 19, 2014


Qt 5.2.1 in Ubuntu

Ubuntu running Qt 5.2.1
Ubuntu running Qt 5.2.1
Qt 5.2.1 landed in Ubuntu 14.04 LTS last Friday, hooray! Making it into a drop-in replacement for Qt 5.0.2 was not trivial. Because of the qreal change, it was decided to rebuild everything against the new Qt, so it was an all-at-once approach involving roughly 130 source packages while the parts were moving constantly. The landing last week meant pushing around three thousand binary packages to the archives - counting all six architectures - with a total size of close to 10 gigabytes.

The new Qt brings performance and features to base future work on, and is a solid base for the future of Ubuntu. You may be interested in the release notes for Qt 5.2.0 and 5.2.1. The Ubuntu SDK got updated to Qt Creator 3.0.1 + the new Ubuntu plugin at the same time, although updates for the older Ubuntu releases are still a work in progress by the SDK Team.

How We Got Here

Throughout the last few months before the final joint push, I filed tens of tagged bugs. For most of that time I was interested only in build and unit test results, since even tracking those was quite a task. I offered simple fixes here and there myself whenever I could find one.

I created automated Launchpad recipe builds for over 80 packages that rely on Qt 5 in Ubuntu. Meanwhile I also kept on updating the Qt packaging for its 20+ source packages and tried to stay on top of Debian's and upstream's changes.

Parallel to this work, some like the Unity 8 and UI Toolkit developers started experimenting with my Qt 5.2 PPA. It turned out the rewritten QML engine in Qt 5.2 - V4 - was not entirely stable when 5.2.0 was released, so they worked together with upstream on fixes. It was only after 5.2.1 release that it could be said that V4 worked well enough for Unity 8. Known issues like these slowed down the start of full-blown testing.

Then everything built, unit tests passed, most integration tests passed and things seemed mostly to work. We had automated autopilot integration testing runs. The apps team tested their way through the whole app store to find out which apps needed fixes - most were fine without changes. On top of the autopilot test failures and other app issues found, manual testing turned up a few more bugs.

Some critical pieces of software, like Sudoku, needed small fixes
Finally last Thursday it was decided to push Qt in, with the belief that the remaining issues either had fixes in branches or were not blockers. It turned out that the real deployment of Qt revealed a couple more problems, some new issues were raised to blocker status, and not all of the believed fixes actually fixed the bugs. So it was not a complete success. However, considering the complexity of the landing, it was an adequate accomplishment.

Specific Issues

Throughout this exercise I bumped into more obstacles than I can remember, but those included:
  • Not all of the packages had seen updates for months or for example since last summer, and since I needed to rebuild everything I found out various problems that were not related to Qt 5.2
  • Unrelated changes during 14.04 development broke packages - like one wouldn't immediately think a gtkdoc update would break a package using Qt
  • Syncing packaging with Debian is GOOD, and the fixes from Debian were likewise excellent and needed, but some changes there had effects on our wide-spread Qt 5 usage, like the mkspecs directory move
  • xvfb used to run unit tests needed parameters updated in most packages because of OpenGL changes in Qt
  • arm64 and ppc64el were late to be added to the landing PPA. Fixing those archs up was quite a last minute effort and needed to continue after landing by the porters. On the plus side, with Qt 5.2's V4 working on those archs unlike Qt 5.0's V8 based Qt Declarative, a majority of Unity 8 dependencies are now already available for 64-bit ARM and PowerPC!
  • While Qt was being prepared the 100 other packages kept on changing, and I needed to keep on top of all of it, especially during the final landing phase that lasted for two weeks. During it, there was no total control of "locking" packages into Qt 5.2 transition, so for the 20+ manual uploads I simply needed to keep track of whether something changed in the distribution and accommodate.
One issue related to the last point was that some of the needed tooling was still in progress at the time. There was no support for automated AP test running using a PPA. There was also no support for building images. If migration to the Ubuntu Touch landing process (CI Train, a middle point on the way to CI Airlines) had been completed for all the packages earlier, handling the locking would have been clearer, and the "trunk passes all integration tests too" requirement would have prevented the "trunk seemingly got broken" situations I ended up in, since I was using bzr trunks everywhere.

Qt 5.3?

We are close to having a promoted Ubuntu image for mobile users running Qt 5.2, if no new issues pop up. Ubuntu 14.04 LTS will be released in a month to the joy of desktop and mobile users alike.

It was discussed during the vUDS that Qt 5.3.x would likely be the Qt version for the next cycle, to be on the more conservative side this time. It's not entirely wrong to say we should have migrated to Qt 5.1 at the beginning of this cycle and only then considered 5.2. With 5.0 in use and its known issues, we almost had to switch to 5.2.

Kubuntu will join the Qt 5 users next cycle, so it's no longer only Ubuntu deciding the version of Qt. Hopefully there can be a joint agreement, but in the worst case Ubuntu will need a separate Qt version packaged.

by Timo Jyrinki ( at March 19, 2014 07:42 AM

March 18, 2014

Henri Bergius

Building an Ingress Table with Flowhub

The c-base space station — a culture carbonite and a hackerspace — is the focal point of Berlin's thriving tech scene. It is also the place where many of the city's Ingress agents converge after an evening of hectic raiding or farming.

An Ingress event at c-base

In February we came up with the idea of combining our dual passions of open source software and Ingress in a new way. Jon Nordby from the Bitraf hackerspace in Oslo had recently shown off the new full-stack development capabilities of Flowhub, made possible by integrating my NoFlo flow-based programming framework for JavaScript with his MicroFlo, which gives similar abilities to microcontroller programming. So why not use them to build something awesome?

Since Flowhub is nearing a public beta, this would also give us a way to showcase some of the possibilities, as well as stress-test Flow-Based Programming in an Internet-connected hardware project. Hackerspace projects often tend to stretch from months to infinity; our experiences with NoFlo and flying drones had already shown that with FBP we can easily parallelize development, challenging some of the central dogmas of the Mythical Man-Month. It was worth a try to see whether this would let us compress the time needed for such a project from a couple of months to a long weekend.

Introducing the Ingress Table

Before the actual hackathon we had two meetings with the project team. There were many decisions to be made, starting from the size and shape of the table to the features it should have. Looking at the different tables in the c-base main hall we settled on a square table of slightly less than 1m2, as that would fit nicely in the area, and still seat the magical number of eight Ingress agents or other c-base regulars.

The tabletop would be a map of c-base and the surrounding area, and it would show the status of the portals nearby, as well as alert people sitting at it of attacks and other Ingress events of interest. Essentially, it'd be a physical world equivalent of the Intel Map.

Intel Map of the area

We considered integrating a regular screen to have maximum flexibility in the face of the changing world of Ingress, but eventually decided that most people at c-base already spend much of their waking hours looking at a screen, and so we'd do something more ambient and just use a set of physical lights.

Exploded view / Assembled view

The hardware and software also needed some thought, especially since some of the parts needed might have long shipping times. Eventually we settled on the combination of a BeagleBone Black ARM computer as the brains of the system, and a LaunchPad Tiva as the microcontroller running the hardware. The computer would run NoFlo on Linux, and we'd flash the microcontroller with MicroFlo.

Our BeagleBone Black

By the time they arrive at c-base, many Ingress agents have their phones and battery packs depleted, so we incorporated eight USB power ports into the table design. Simply plug in your own cable and you can charge your device while enjoying the beer and the chat.

Once the plans had been set, a flurry of preparations began. We would need lots of things, ranging from wood and glass parts for the table shell, to various different electronics and computer parts for the insides. And some of these would have to be ordered from China. Would they arrive in time?

Design render of the table

I spent the two weeks before the hackathon doing a project in Florence, and it was quite interesting to coordinate the logistics remotely. Thankfully our Berlin team did a stellar job of tracking missing shipments and collecting the things we needed!

The hackathon

I landed in Berlin in the early evening of Friday, March 14th. After negotiating the rush-hour public transport from Tegel airport, I arrived at the space station to see most of our team already there, unpacking and getting the supplies ready for the hackathon.

Buying the materials

At this point we essentially had only the raw materials available. Planks of wood, plates of glass and plastic. And a lot of electronics components. No assembly had yet been done, and no lines of code had been written or graphs drawn for the project.

We quickly organized the hackathon into three tracks: hardware, software, and electronics. The hardware team got themselves busy building the table shell, as that would need to be finished early so that the paint would have time to dry before we'd start assembling the electronics into it. Over the next day they'd often call the other teams over to help in holding or moving things, and also for the very important task of test-sitting the table to figure out the optimal trade-off between table height and legroom.

Legroom measurements

While the hardware guys were working, we started designing the software part. Some basic decisions had to be taken on how we'd get the data, and how we would filter and transform the raw portal statuses into commands for the actual lights in the table.

Eventually we settled on a NoFlo graph that would poll the portal data in and run it through a set of transformations to detect the data points of interest, like portals that have changed owners or are under attack. In parallel we would run some animation loops to create a more organic, shifting feel to the whole map by having the light shining through the streets constantly shift and move.

The main Ingress Table NoFlo graph

(and yes, the graph you see above is the actual running code of the table)
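The poll-and-diff idea at the heart of that graph is simple to sketch outside NoFlo too. This is only an illustration, not the table's actual code; the portal records and faction names below are hypothetical:

```python
def changed_portals(previous, current):
    """Return ids of portals whose owning faction changed between two polls."""
    return [pid for pid, faction in current.items()
            if previous.get(pid) != faction]

# Hypothetical snapshots from two consecutive polls of the portal data.
before = {"c-base": "Resistance", "spree": "Enlightened"}
after = {"c-base": "Enlightened", "spree": "Enlightened"}
print(changed_portals(before, after))  # ['c-base']
```

In the real table, each detected change is what ultimately triggers a light animation.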

Software team at robolab

Since the electronics wouldn't be working for a while still, we also decided to build an Ingress Table Emulator in HTML and NoFlo. This would give us something to test the data and our graphs with while the other teams were still working on their parts. It proved very useful: already on Saturday evening we were able to watch a big Ingress battle through our simulated blinking lights, and see our emulated table go through pretty much all the different states we were interested in.

The software team at work

Once the table shell had been built and the paint was drying, the hardware team started preparing the other things like the map layer, the glass top, and the USB chargers.

Watching the paint dry / Attaching the map sticker

For the electronics we noticed that some parts were still missing from the inventory, so I had to do a quick supply run on Saturday. But once we got those, the team got into calculations and soldering.

Electronics work

Every project has its setbacks, and in this case they came in the form of running pre-release software. It turned out that the LaunchPad port of MicroFlo still had some issues, and so most of Sunday was spent debugging the communications protocol and tuning the components. But the end result is a much improved MicroFlo, and eventually we got the big moment of triumph: seeing the street lights animate for the first time. LED strips controlled by a LaunchPad Tiva, in turn controlled by animation loops running in a NoFlo graph on Node.js.

Food time / Figuring out communications problems

On Monday evening we convened at c-base for the final push. The street lights were ready, but there were still some issues with getting the table connected wirelessly to the space station network. And we still needed to implement the MicroFlo component for the portal lights. The latter resulted in an epic parallel programming and debugging session between Jon in Norway and Uwe in Berlin. But by the end of the evening we were able to test the full system for the first time, and carry the table to its new home.

Testing the lights / The table running in the main hall

It was time to celebrate. For an Ingress table, this meant sitting around the table enjoying cold beers, while hacking a level 8 blue portal and watching the lights change across the board as agents ventured out.

Ingress Table in production

(We're still in the process of collecting media about the project. The table will look a lot more awesome in video, and I hope I'll be able to add some of those to this post soon)

Moving ahead

Having the first running version of the table is of course a big milestone. Now we should monitor it for some time (over beer, of course) and make adjustments as necessary. Some things obviously need changing, such as the brightness of the lights given the table's location in the main hall. And of course we'll only know about the full system's robustness once it has a bit more mileage.

Since we already have a HTML emulator of the table, it might be fun to release that to the public at some point. That way agents who are not at the c-base main hall could also see what is going on with this simple interface.

An interesting area of development is also to see how the table could integrate better with the rest of the space station. There are various screens ranging from the awesome Mate Light to smaller screens and gauges everywhere. And all of that is pretty much networked and available. Maybe we could visualize some events of interest in other parts of the station. This kind of "Internet of Things" project is never finished.

So far Niantic Labs — the makers of Ingress — have limited the availability of a portal data API to a few selected parties, and so for now we had to work with a third party to get the information needed. We hope this table will be another step in convincing Niantic of the creative potential that an official, open Ingress API would unleash.

I'd like to give big thanks to everybody who participated in the hackathon — whether on location or remotely from Oslo — as well as to those who were cheering us on. I'm also grateful to Flowhub for sponsoring the project. And of course to c-base for being an awesome place where such things can happen.

The full source code for the Ingress Table can be found from

Flowhub - Make code playful

by Henri Bergius ( at March 18, 2014 12:00 AM

March 03, 2014

Niklas Laxström

Numbers on sign-up process

 features a good user experience for non-technical translators. A crucial, or even critical, component of that is signing up. An unrelated data collection for my PhD studies inspired me to get some data on the user registration process. I will present the results below.


At the process of becoming an approved translator has been, arguably, complicated in some periods.

In the early days of the wiki, permissions were not clearly separated: hundreds of users were simply given the full set of permissions to edit the MediaWiki namespace and translate that way.

Later, we required people to jump through hoops of various kinds after registering to be approved as translators. They had to create a user page with certain elements and post a request on a separate page, and they would not get notifications when they were approved unless they tweaked their preferences.

At some point, we started using the LiquidThreads extension: now users could get notifications when approved, at least in theory. That brought its own set of issues, though: many people thought that the LiquidThreads search box on the requests page was the place to write the title of their request. After entering a title, they ended up on a search results page, which was a dead end. This usability issue was so annoying and common that I completely removed the search field from LiquidThreads.
In early 2010 we implemented a special page wizard (FirstSteps) to guide users through the process. For years, this has allowed new users to get approved, and start translating, in a few clicks and a handful of hours after registering.

In late 2013 we enabled the new main page containing a sign-up form. Using that form, translators can create an account in a sandbox environment. Accounts created this way are normal user accounts, except that they can only make example translations to get a feel for the system. The example translations give site administrators some hints on whether to approve the user as a translator or reject the request.

Data collection

The data we have is not ideal.

  • For example, it is impossible to say what’s our conversion rate from users visiting the main page to actual translators.
  • A lot of noise is added by spam bots which create user accounts, even though we have a CAPTCHA.
  • When we go far back in the history, the data gets unreliable or completely missing.
    • We only have dates for accounts created after 2006 or so.
    • The log entry format for user permissions has changed multiple times, so the promotion times are missing or even incorrect for many entries until a few years back.

The data collection was done with two scripts I wrote for this purpose. The first script produces a tab-separated values (TSV) file containing all accounts which have been created. Each line has the following fields:

  1. username,
  2. time of account creation,
  3. number of edits,
  4. whether the user was approved as translator,
  5. time of approval and
  6. whether they used the regular sign-up process or the sandbox.

Some of the fields may be empty because the script was unable to find the data. User accounts for which we do not have the account creation time are not listed. I chose not to try methods that could approximate the account creation time, because data that far back is too unreliable to be useful.

The first script takes a couple of minutes to run at, so I split further processing into a separate script to avoid doing the slow data fetching many times. The second script calculates a few additional values, like the average and median time to approval, and aggregates the data per month.
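As a rough illustration of the per-month aggregation (the actual scripts are in Gerrit; the function name, the ISO timestamps, and the "yes"/"no" approval flag below are assumptions, only the six-column layout follows the list above):

```python
import csv
import io
from datetime import datetime
from statistics import median

def monthly_approval_stats(tsv_text):
    """Group approved translators by account-creation month and
    compute the median delay (in hours) until approval."""
    per_month = {}
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    for username, created, edits, approved, approved_at, method in reader:
        if approved != "yes" or not approved_at:
            continue  # skip rejected users and rows with missing data
        c = datetime.fromisoformat(created)
        a = datetime.fromisoformat(approved_at)
        delay_h = (a - c).total_seconds() / 3600
        per_month.setdefault(c.strftime("%Y-%m"), []).append(delay_h)
    return {month: median(delays) for month, delays in per_month.items()}

rows = "\n".join([
    "alice\t2013-12-01T10:00:00\t120\tyes\t2013-12-01T14:00:00\tsandbox",
    "bob\t2013-12-05T09:00:00\t3\tyes\t2013-12-05T11:00:00\tregular",
    "mallory\t2014-01-02T08:00:00\t0\tno\t\tsandbox",
])
print(monthly_approval_stats(rows))  # {'2013-12': 3.0}
```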

The data also includes translators who signed up through the sandbox, but got rejected: this information is important for approval rate calculation. For them, we do not know the exact registration date, but we use the time they were rejected instead. This has a small impact on monthly numbers, if a translator registers in one month and gets rejected in a later month. If the script is run again later, numbers for previous months might be somewhat different. For approval times there is no such issue.


Image 1: Account creations and approved translators at

Image 1 displays all account creations at as described above, simply grouped by their month of account creation.

We can see that the approval rate has gone down over time. I assume this is caused by spam bot accounts. We did not exclude them, hence we cannot tell whether the approval rate has gone up or down for human users.

We can also see that the number of approved translators who later turn out to be prolific translators has stayed pretty much constant each month. A prolific translator is an approved translator who has made at least 100 edits. The edits can be from any point in time; the script only looks at the current edit count, so the graph above says nothing about wiki activity at any particular moment.

There is an inherent bias towards old users for two reasons. First, in the beginning translators were essentially invited to the new tool from the existing methods they were using, so they were likely to continue translating with the new tool. Second, new users have had less time to reach 100 edits. On the other hand, we can see that even in the past few months a dozen translators have already made over 100 edits.

I have collected some important events below, which I will then compare against the chart.

  • 2009: Translation rallies in August and December.
  • 2010-02: The special page to assist in filing translator requests was enabled.
  • 2010-04: We created a new (now old) main page.
  • 2010-10: Translation rally.
  • 2011: Translation rallies in April, September and December.
  • 2012: Translation rallies in August and December.
  • 2013-12: The sandbox sign-up process was enabled.

There is an increase in account creations and approved translators a few months after the assisting special page was enabled. The likely explanation is the new main page, which had a big green button leading to the special page. The September 2011 translation rally seems to have been very successful in recruiting new translators, but the other rallies are visible in the chart as well.

Image 2: How long it takes for account creation to be approved.

The second image shows how long it takes from account creation until a site administrator approves the request. Before the sandbox, users had to submit a request to become translators on their own: the time it took them to do so was outside the site administrators' control. With the sandbox, that is much less the case, as users get either approved or rejected within a couple of days. Let me give an overview of how the sandbox works.

All users in the sandbox are listed on a special page together with the sandbox translations they have made. Administrators can then approve or reject the users; they usually wait until the user has made a handful of translations. Administrators can also send email reminders to the users to make more translations. If translators do not provide translations within some time, or the translations are very bad, they get rejected. Otherwise they are approved and can immediately start using the full translation interface.

We can see that the median approval time is just a couple of hours! The average time varies wildly, though. I am not completely sure why, but I have two guesses.
First, some very old user accounts have become active again after being dormant for months or years and have finally requested translator rights. Even one of these can skew the average significantly. On a quick inspection of the data, this seems plausible.
Second, originally we made all translators site administrators. At some point, we introduced the translator user group, and existing translators have gradually been getting this new permission as they returned to the site. The script only counts the time when they were added to the translator group.
Alternatively, the script may have a bug and return wrong times. However, that should not be the case for recent years, because the log format has been stable for a while. In any case, the averages are so big as to be useless before the year 2012, so I left them out of the graph completely.

The sandbox has been in use only for a few months. For January and February 2014, the approval rate has been slightly over 50%. If a significant portion of rejected users are not spam bots, there might be a reason for concern.

Suggested action points

  1. Store the original account creation date and “sandbox edit count” for rejected users.
  2. Investigate the high rejection rate. We can ask the site administrators why about half of the new users are rejected. Perhaps we could also add a "mark as spam" action to gain insight into whether we get a lot of spam. Event logging could also be used to get more insight into the points of the process where users get stuck.

Source material

Scripts are in Gerrit. Version 2 of the scripts was used for this blog post. Processed data is in a LibreOffice spreadsheet. Original and updated data is available on request; please email me.

by Niklas Laxström at March 03, 2014 04:46 PM

February 21, 2014

Riku Voipio

Where the armel buildd time went

Wanna-build, wanna-build, which packages spent the most time on the armel buildds since the beginning of 2013?

package | sum(build_time)
libreoffice | 114 09:16:34
linux | 113 02:58:50
gcc-4.8 | 064 01:21:09
webkitgtk | 059 19:09:27
acl2 | 043 16:40:50
gcc-4.7 | 028 14:03:53
iceweasel | 026 19:02:13
gcc-snapshot | 026 01:31:21
openjdk-7 | 020 02:41:53
php5 | 019 16:13:22
llvm-toolchain-3.3 | 017 19:05:38
qt4-x11 | 017 02:57:09
espresso | 016 03:50:37
pypy | 015 07:07:25
icedove | 014 18:57:08
insighttoolkit4 | 014 17:16:43
qtbase-opensource-src | 014 12:39:09
llvm-toolchain-3.4 | 012 03:06:15
mono | 011 22:30:13
atlas | 011 20:40:54
qemu | 011 17:11:09
calligra | 011 16:05:55
gnuradio | 011 15:19:35
resiprocate | 011 10:14:56
llvm-toolchain-snapshot | 011 02:04:44
libav | 010 13:52:03
python2.7 | 009 18:58:33
ghc | 009 18:28:48
gnat-4.8 | 009 13:59:57
axiom | 009 12:40:24
cython | 009 00:47:04
openjdk-6 | 008 16:38:14
oce | 008 10:29:20
eglibc | 008 06:04:26
ppl | 007 20:48:45
root-system | 007 17:32:16
openturns | 007 10:12:53
gcl | 007 08:02:42
gcc-4.6 | 007 02:50:48
k3d | 007 00:36:11
python3.3 | 007 00:25:42
llvm-toolchain-3.2 | 007 00:17:59
vtk | 006 17:53:28
samba | 006 17:17:27
mysql-workbench | 006 14:36:46
kde-workspace | 006 07:31:12
gmsh | 006 04:32:42
psi-plus | 006 04:30:08
octave | 006 04:17:22
paraview | 006 04:13:25
The time format is "days HH:MM:SS". Our ridiculously stable mv78x00 buildds have served us well, but the time has come to let them rest. Now, to find out how many of these top time-consuming packages can build with parallel make and are not doing so already.
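For anyone wanting to crunch the table above, converting the "days HH:MM:SS" values into comparable numbers takes only a few lines. A quick sketch (not part of wanna-build itself):

```python
def build_time_to_seconds(value):
    """Convert a wanna-build "days HH:MM:SS" build time to seconds."""
    days, clock = value.split()
    h, m, s = (int(part) for part in clock.split(":"))
    return int(days) * 86400 + h * 3600 + m * 60 + s

# The two largest entries from the table above, ranked by total time.
times = {"libreoffice": "114 09:16:34", "linux": "113 02:58:50"}
ranked = sorted(times, key=lambda pkg: build_time_to_seconds(times[pkg]), reverse=True)
print(ranked)  # ['libreoffice', 'linux']
```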

by Riku Voipio ( at February 21, 2014 01:32 PM

February 06, 2014

Henri Bergius

Full-Stack Flow-Based Programming

The idea of Full-Stack Development is quite popular at the moment — building things that run on both the browser and the server side of web development, usually utilizing similar languages and frameworks.

With Flow-Based Programming and the emerging Flowhub ecosystem, we can take this even further. Thanks to the FBP network protocol, we can build and monitor graphs spanning multiple devices and flow-based environments.
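For a flavor of what travels over that protocol: it is JSON messages (typically over WebSocket), each carrying a protocol, a command, and a payload. A minimal Python sketch of building such frames; the message structure follows the published FBP network protocol, but the graph id is made up:

```python
import json

def fbp_message(protocol, command, payload):
    """Serialize one FBP network protocol frame as JSON."""
    return json.dumps({"protocol": protocol, "command": command, "payload": payload})

# Ask a runtime to describe itself, then start a (hypothetical) graph.
hello = fbp_message("runtime", "getruntime", {})
start = fbp_message("network", "start", {"graph": "example/main"})
print(hello)  # {"protocol": "runtime", "command": "getruntime", "payload": {}}
```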

Jon Nordby gave a Flow-Based Programming talk in the FOSDEM Internet of Things track last weekend. His demo was running an FBP network comprising three different environments that talk to each other. You can find the talk online.

Here are some screenshots of the different graphs.

MicroFlo running on an Arduino Microcontroller and monitoring a temperature sensor:

MicroFlo on Arduino

NoFlo running on Node.js and communicating with the Arduino over a serial port:

NoFlo on Node.js

NoFlo running in browser and communicating with the Node.js process over WebSockets:

NoFlo on browser

(click to see the full-size picture)

Taking this further

While this setup already works, as you can see the three graphs are still treated separately. The next obvious step will be to utilize the subgraph features of NoFlo UI and allow different nodes of a graph to represent different runtime environments.

This way you could introspect the data passing through all the wires in a single UI window, and "zoom in" to see each individual part of the system.

The FBP ecosystem is growing all the time, with different runtimes popping up for different languages and use cases. While NoFlo's JavaScript focus makes it part of the Universal Runtime, there are many valid scenarios where other runtimes would be useful, especially on mobile, embedded, and desktop.

Work to be done

Interoperability between them is an area we should focus on. The network protocol needs more scrutiny to ensure all scenarios are covered, and more of the FBP/dataflow systems need to integrate it.

Some steps are already being taken in this direction. After Jon's session in FOSDEM we had a nice meetup discussing better integration between MicroFlo on microcontrollers, NoFlo on browser and server, and Lionel Landwerlin's work on porting NoFlo to the GNOME desktop.

Full-stack FBP discussions at FOSDEM 2014

If you're interested in collaborating, please get in touch!

Photo by Forrest Oliphant.

by Henri Bergius ( at February 06, 2014 12:00 AM

January 08, 2014

Niklas Laxström

First day at work

Officially I started on January 1st, but apart from getting an account, today was the first real day at the university. It still feels great – the "oh my, what did I sign up for" feeling has yet to come. ;)

After the WMF daily standup, I had my usual breakfast and headed to the city center, where our research group of four had a meeting. To my surprise, the eduroam network worked immediately. I had configured it at home earlier, based on a guide from the site of some Swiss university if I remember correctly: my own university didn't provide good instructions for setting it up with Fedora and KDE.

Institute of Behavioural Sciences, University of Helsinki

The building on the left is part of Institute of Behavioural Sciences. It is just next to the building (not visible) where I started my university studies in 2005. (Photo CC BY-NC-ND by Irmeli Aro.)

On my side, preparations for the IWSDS conference are now the highest priority. I have until Monday to prepare my first ever poster presentation. I found PowerPoint and InDesign templates on the university's website (ugh, proprietary tools). Then there are a few days to get it printed before I fly on Thursday. After the trip I will make a website for the project to give it some visibility, and find out about the next steps as well as how to proceed with my studies.

After this topic, I got to hear about another part of the research: the collection of data in the Sami languages. I connected them with Wikimedia Suomi, which has expressed interest in working with the Sami people.

After the meeting, we went hunting for so-called WBS codes, which are needed in various places to allocate expenses, for example for poster printing and travel plans. (In case someone knows where the abbreviation WBS comes from, there are at least two people in the world who would be interested to hear.) The people I met there were all very friendly and helpful.

On my way home I ran into an old friend from Päivölä and university times (Moi Jouni!) on the metro. There was also a surprise ticket inspection – a 25% inspection rate for my trips this year, based on 4 observations. I guess I need more observations before this is statistically significant ;)

One task left for me when I got home was the mandatory travel plan. It needs to be done through the university's travel management software, which is not directly accessible. After trying without success to access it – first through their web-based VPN proxy, second with openvpn via NetworkManager via “some random KDE GUI for that” on my laptop, and third even with a proprietary VPN application on my Android phone – I gave up for today. It's likely that the VPN connection itself is not the problem and the issue is somewhere else.

It’s still not known where I will get a room (I'm employed in a different department from the one where I'm doing my PhD). I will likely often work from home anyway, as I'm used to.

by Niklas Laxström at January 08, 2014 09:10 PM

January 05, 2014

Niklas Laxström

MediaWiki i18n explained: {{PLURAL}}

This post explains how MediaWiki handles plural rules to developers who need to work with it. In other words, how a string like “This wiki {{PLURAL:$1|0=does not have any pages|has one page|has $1 pages}}” becomes “This wiki has 425 pages”.


As mentioned before, we have adopted a data-based approach. Our plural rules come from the Unicode CLDR (Common Locale Data Repository) in XML format and are stored in languages/data/plurals.xml. These rules are supplemented by local overrides in languages/data/plurals-mediawiki.xml for languages not supported by CLDR, or where we have yet to unify our existing local rules with the CLDR rules.

As a short recap, translators handle plurals by writing out all the different forms explicitly. That means there are separate forms for singular, dual, plural and so on, depending on what grammatical numbers the language has. There may be more forms for other grammatical reasons; for example, in Russian the grammatical case of the noun varies depending on the number. The rules from CLDR put all numbers into different boxes, each box corresponding to one form provided by the translator.
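For Russian, the rules put numbers into boxes roughly as follows. This is a hand-written Python sketch of the CLDR rules (MediaWiki evaluates the actual XML data rather than hard-coding anything like this):

```python
def russian_plural_box(n):
    """Return the index of the plural box an integer falls into for Russian:
    0 = 'one', 1 = 'few', 2 = 'many', 3 = 'other'.
    A hand-written sketch of the CLDR rules, not MediaWiki code."""
    if n % 10 == 1 and n % 100 != 11:
        return 0  # one: 1, 21, 31, ... but not 11
    if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14):
        return 1  # few: 2-4, 22-24, ... but not 12-14
    if n % 10 == 0 or n % 10 in (5, 6, 7, 8, 9) or n % 100 in (11, 12, 13, 14):
        return 2  # many: 0, 5-20, 25-30, ...
    return 3      # other: not reached for integers; used e.g. for fractions

print(russian_plural_box(425))  # 425 ends in 25, so it lands in the 'many' box
```

Each returned index corresponds to one of the forms the translator wrote between the pipes.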


The plural rules are stored in the localisation cache (not to be confused with the message cache and the many other caches in MediaWiki) together with other language-specific localisation data. The localisation cache can be stored in different places depending on configuration. The default is the SQL database, but the data can also be in CDB files, as it is at the Wikimedia Foundation.

The whole process starts 1) when the user runs php maintenance/rebuildLocalisationCache.php, or 2) during a web request, if the cache is stale and automatic cache rebuilds are allowed (as they are by default).

The code proceeds as follows:


  • LocalisationCache::getPluralRules fills pluralRules
    • LocalisationCache::loadPluralFiles loads both XML files, merges them and stores the result in the in-process cache
  • LocalisationCache::getCompiledPluralRules fills compiledPluralRules
    • LocalisationCache::loadPluralFiles returns the rules from the in-process cache
    • CLDRPluralRuleEvaluator::compile compiles the standard notation into RPN notation
  • LocalisationCache::getPluralTypes fills pluralRuleTypes

So now for the given language we have three lists (see Table 1). The pluralRules are used in the frontend (JavaScript), and the compiledPluralRules are used in the backend (PHP) with a custom evaluator, which Tim Starling wrote for performance reasons. pluralRuleTypes stores the mapping between numerical indexes and the CLDR keywords, which are not used in MediaWiki's plural syntax. Note that Russian has four plural forms: the fourth form, called other, is used when none of the other rules match and is not stored anywhere.

Table 1: Stored plural data for Russian

pluralRuleTypes | pluralRules                                                | compiledPluralRules
"one"           | "n mod 10 is 1 and n mod 100 is not 11"                    | "n 10 mod 1 is n 100 mod 11 is-not and"
"few"           | "n mod 10 in 2..4 and n mod 100 not in 12..14"             | "n 10 mod 2 4 .. in n 100 mod 12 14 .. not-in and"
"many"          | "n mod 10 is 0 or n mod 10 in 5..9 or n mod 100 in 11..14" | "n 10 mod 0 is n 10 mod 5 9 .. in or n 100 mod 11 14 .. in or"
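The compiled RPN form can be evaluated with a simple stack machine. Below is a minimal Python re-implementation sketch in the spirit of CLDRPluralRuleEvaluator (the real evaluator is PHP; this covers only the operators that appear in Table 1):

```python
def evaluate_compiled(rule, n):
    """Evaluate an RPN-compiled CLDR plural rule for the number n.
    A sketch supporting only the operators used in Table 1."""
    stack = []
    for token in rule.split():
        if token == 'n':
            stack.append(n)
        elif token.isdigit():
            stack.append(int(token))
        elif token == 'mod':
            b, a = stack.pop(), stack.pop()
            stack.append(a % b)
        elif token == '..':                      # build an inclusive range
            b, a = stack.pop(), stack.pop()
            stack.append(range(a, b + 1))
        elif token == 'is':
            b, a = stack.pop(), stack.pop()
            stack.append(a == b)
        elif token == 'is-not':
            b, a = stack.pop(), stack.pop()
            stack.append(a != b)
        elif token == 'in':
            r, a = stack.pop(), stack.pop()
            stack.append(a in r)
        elif token == 'not-in':
            r, a = stack.pop(), stack.pop()
            stack.append(a not in r)
        elif token == 'and':
            b, a = stack.pop(), stack.pop()
            stack.append(a and b)
        elif token == 'or':
            b, a = stack.pop(), stack.pop()
            stack.append(a or b)
    return stack.pop()

# The compiled Russian "one" rule from Table 1:
one = 'n 10 mod 1 is n 100 mod 11 is-not and'
print(evaluate_compiled(one, 1), evaluate_compiled(one, 11))  # True False
```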

The cache also stores the magic word PLURAL, defined in languages/messages/MessagesEn.php and translated to other languages, so on Finnish-language wikis one can write {{MONIKKO:$1|$1 talo|$1 taloa}} if desired. For compatibility reasons, interface translations always use the English magic words.

Invocation on backend

There are roughly three ways to trigger plural parsing:

  1. using the plural syntax in a wiki page,
  2. calling the plural parser on a Message object with output format text,
  3. using the plural syntax in a message with output format parse, which calls the full wikitext parser as in 1.

In all cases we end up in Parser::replaceVariables, which expands all magic words and templates (anything enclosed in double braces; sometimes also called {{ constructs). It loads the possible translated magic words and checks whether the {{thing}} in the wikitext or message matches a known magic word. If not, the {{thing}} is treated as a template call. If the plural magic word matches, the parser calls CoreParserFunctions::plural, which takes the arguments, turns them into an array and calls the correct language object with Language::convertPlural( number, forms ): see Table 2 for the function call trace.

In the Language class we first handle the explicit plural forms explained in a previous post on explicit zero and one forms. Explicit forms that don't match are removed, and we continue with the remaining forms, calling Language::getPluralRuleIndexNumber( number ), which first loads the compiled plural rules into the in-process cache and then calls CLDRPluralRuleEvaluator::evaluateCompiled, which returns the box the number belongs to. Finally we take the matching form given by the translator, or the last form provided. The return value is then substituted in place of the plural magic word call.
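The selection logic can be sketched like this (a simplified Python illustration of what Language::convertPlural does; the hard-coded "n == 1" rule stands in for the real rule evaluation, and the parsing of explicit '0='/'1=' forms is deliberately naive):

```python
def convert_plural(number, forms):
    """Pick a plural form: explicit '0='/'1=' forms win outright, then the
    plural rule chooses a box index, with the last form as the fallback.
    A simplified sketch, not MediaWiki's actual PHP implementation."""
    explicit = {}
    rest = []
    for form in forms:
        head, sep, tail = form.partition('=')
        if sep and head.isdigit():
            explicit[int(head)] = tail   # explicit form such as '0=...'
        else:
            rest.append(form)
    if number in explicit:
        return explicit[number]
    # English stand-in for Language::getPluralRuleIndexNumber:
    # box 0 when n is 1, box 1 ('other') for everything else.
    index = 0 if number == 1 else 1
    # Take the matching form, or the last form the translator provided.
    return rest[min(index, len(rest) - 1)]

forms = ['0=does not have any pages', 'has one page', 'has $1 pages']
print(convert_plural(425, forms))  # -> has $1 pages
```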

Table 2: Function call trace for the plural magic word

Via Message::parse:

  • Message::toString
  • Message::parseText
  • MessageCache::parse
  • Parser::parse
  • Parser::internalParse

Via Message::text:

  • Message::toString
  • Message::transformText
  • MessageCache::transform
  • Parser::transformMsg
  • Parser::preprocess

The two paths converge here:

  • Parser::replaceVariables
  • PPFrame_DOM::expand
  • Parser::braceSubstitution
  • Parser::callParserFunction
  • call_user_func_array
  • CoreParserFunctions::plural
  • Language::convertPlural
  • [Plural rule evaluation]

Invocation on frontend

The resource loader module (implemented in class ResourceLoaderLanguageDataModule) is responsible for loading the plural rules from localisation cache and delivering them together with other language data to JavaScript.

The resource loader module mediawiki.jqueryMsg provides yet another limited wikitext parser, which can handle plurals, links and a few other things. The module mediawiki (global mediaWiki, usually aliased to mw) provides the messaging interface with functions like mw.msg() or mw.message().text(). Those will not handle plurals without the aforementioned mediawiki.jqueryMsg module. Translated magic words are not supported on the frontend.

If a plural magic word is found, the frontend convertPlural method is called. It is provided, in a few hops, by the module mediawiki.language, which depends among others on mediawiki.cldr. The latter depends on mediawiki.libs.pluralruleparser, which evaluates the (non-compiled) CLDR plural rules to reach the same result as on the PHP side; it was written by Santhosh Thottingal of the Wikimedia Language Engineering team and is hosted on GitHub.

by Niklas Laxström at January 05, 2014 08:46 PM

December 20, 2013

Riku Voipio

Replicant on Galaxy S3

I recently got myself a Galaxy S3 for testing out Replicant, an Android image made of only open source components.

Why Galaxy S3?

It is well supported in Replicant; almost every driver is already open source. The hardware specs are acceptable: a 1.4 GHz quad core, 1 GB of RAM, microSD, and all the peripheral chips one expects in a phone. The Galaxy S3 has sold insanely well (supposedly 50 million units), meaning I won't run out of accessories and aftermarket spare parts any time soon. The massive installed base also means a huge potential user community. The S3 is still available new, with two years of warranty.

Why not

While the S3 is still available new, it is safe to assume production is already ending – a 1.5-year-old product is ancient history in the mobile world! It remains to be seen how much the massive user base will defend against obsolescence. Upstream kernel support for the "old" CPU is an open question; Replicant still bases its kernel on the vendor kernel. The bootloader is unlocked, but it can't be changed due to trusted^Wtreacherous computing, preventing things like booting from an SD card. Finally, not everything is open source: the GPU (Mali) driver, while being reverse engineered, is taking its time – and the GPS hasn't been reversed yet.

Installing replicant

Before installing, you might want to take a copy of the firmware files from the original installation (since Replicant won't provide them). Enable developer mode on the S3 and:
sudo apt-get install android-tools
mkdir firmware && cd firmware
adb pull /system/vendor/firmware/
adb pull /system/etc/wifi
After that, just follow the official Replicant install guide for the S3. If you don't mind closed source firmware, after the install you need to push the firmware files back:

adb shell mount -o remount,rw /system
adb push . /system/vendor/firmware
Here was my first catch: the wifi firmware from the Jelly Bean based image was not compatible with the older ICS based Replicant.

Using replicant

Booting into Replicant is fast, a few seconds to the PIN screen. You are greeted with the standard Android lock screen; the usual slide/pin/pattern options are available. Basic functions like phone, SMS and web browsing have icons on the home screen and work without a hitch. Likewise the camera seems to work; really the only smartphone feature missing is GPS.

Sidenote – this image looks a LOT better on the S3 than on my ThinkPad. No wonder people are flocking to phones and tablets when laptop makers use such crappy components.
The grid menu has the standard Android AOSP open source applications in the ICS style menu, with the extra of an F-Droid icon – the installer for open source applications. F-Droid is its own project that complements the Replicant project by maintaining a catalog of Free Software.
F-Droid brings hundreds of open source applications not only to Replicant but to any other Android users, including platforms with Android compatibility, such as Jolla's Sailfish OS. Of course the F-Droid client is open source, like the F-Droid server (which is in Debian too). The F-Droid server is not just repository management; it can take care of building and deploying Android apps.
The WebKit based Android browser renders web sites without issues, and if you are not happy with it, you can download Firefox from F-Droid. Many websites will notice you are on mobile and serve mobile versions, which is sometimes good and sometimes annoying. Worse, some pages detect you are on Android and only offer to load their closed Android app for viewing the page. OTOH I am already viewing their closed source website, so using a closed source app to view it isn't much worse.

The keyboard is again the standard Android one, but for most unixy people the Hacker's Keyboard, with arrow keys and Ctrl/Alt, will probably be the one you want.

Closing thoughts

While using Replicant has been very smooth, the lack of GPS is becoming a deal-breaker. I could just copy the gpsd from CyanogenMod, like some have done, but that kind of defeats the purpose of having Replicant on the phone. So I may move back to CyanogenMod, unless I find time to help reverse engineer the BCM4751 GPS.

by Riku Voipio ( at December 20, 2013 08:41 PM

December 18, 2013


Three reasons to join COSS, the Finnish Centre for Open Systems and Solutions

Who looks after the interests of the users of Ubuntu and other open source software? Which organisation is the main promoter and advocate of open source in Finland? The answer is COSS ry.

Three reasons to support COSS

COSS logo

  1. COSS raises awareness of open source, especially in public administration
  2. COSS promotes the growth and employment of the Finnish IT sector by accelerating the success of technology of Finnish origin
  3. COSS builds open source expertise through training, events and networking opportunities

Become a supporting member of COSS ry! →

What is COSS?

The Finnish Centre for Open Systems and Solutions – COSS ry is a non-profit association that promotes open source, open data, open interfaces and open standards.

COSS has been active since 2003 and is internationally known as one of the oldest and most active openness centres in Europe.

COSS promotes cooperation between communities, companies and public administration, and among other things organises events. COSS's website hosts a nationwide calendar of all events in the field:

The association works by informing and educating about the principles, practices and technologies of openness. Its website is the largest in the field in Finland.

Examples of COSS's activities

  • Supports public administration in all efforts to promote the openness of information systems
  • Promotes open source solutions, services and business
    • Organising and supporting events
    • Communicating online and in other media
    • Maintaining an active business network: around 100 Finnish open source companies are COSS members
  • Promotes cooperation between companies, research institutes and universities
  • Promotes collaboration between companies and developer communities
    • The localisation working group translates software into Finnish
    • Organising Linux event days
    • Supporting the Devaamo Summit event
  • Maintains cooperation between Finnish and international organisations and communities in the field
    • Organised the KDE developers' Akademy 2010 event in Tampere
    • Cooperation with the Linux Foundation, Free Software Foundation Europe and many others
  • Promotes open source, open standards, open interfaces and open data
  • Awards the annual Open World Hero prize

Become a supporting member of COSS ry! →

Please help COSS gain more supporters by sharing this message on social media!

by Otto at December 18, 2013 01:29 PM

November 27, 2013


Jolla launch party

And now for something completely different: I've got my hands on a Jolla, and it's beautiful!

A quick dmesg is of course among the first things to do...
[    0.000000] Booting Linux on physical CPU 0
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version (abuild@es-17-21) (gcc version 4.6.4 20130412 (Mer 4.6.4-1) (Linaro GCC 4.6-2013.05) ) #1 SMP PREEMPT Mon Nov 18 03:00:49 UTC 2013
[ 0.000000] CPU: ARMv7 Processor [511f04d4] revision 4 (ARMv7), cr=10c5387d
[ 0.000000] CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
[ 0.000000] Machine: QCT MSM8930 CDP
... click for the complete file ...
And what it has eaten: Qt 5.1!
... click for the complete file ...
It was a very nice launch party, thanks to everyone involved.

Update: a few more photos in my Google+ Jolla launch party gallery

by Timo Jyrinki ( at November 27, 2013 08:10 PM

Workaround for setting Full RGB when Intel driver's Automatic setting does not work


I upgraded from Linux 3.8 to 3.11 along with newer Mesa, X.Org and Intel driver recently, and found that a small workaround was needed because of upstream changes.

The upstream change was Add "Automatic" mode for "Broadcast RGB" property, with Automatic as the new default. This is a sensible default, since many (most?) TVs default to the more limited 16–235 range, and continuing to default to Full from the driver side would mean wrong colors on the TV. I've set my screen to support the full 0–255 range available, so as not to cut down the number of available shades.

Unfortunately it seems the Automatic setting does not work for my HDMI input, i.e. blacks become grey since the driver still outputs the more limited range. Maybe there is something to improve on the driver side, but I'd guess it's more about my 2008 Sony TV having a mode for which the standard suggests limited range. I remember the TV did default to limited range, so maybe the EDID data from the TV does not change when setting its RGB range to Full.

I hope the Automatic setting works to offer full range on newer screens and the modes they have, but that's probably up to the manufacturers and standards.

Below is an illustration of the correct setting on my Haswell machine. When Broadcast RGB is left at its default Automatic setting, the above image is displayed. When set to Full, the image below with deeper blacks is seen instead. I used manual settings on my camera, so the exposure is the same.


For me the workaround has evolved into the following so far. Create a /etc/X11/Xsession.d/95fullrgb file:
if [ "$(/usr/bin/xrandr -q --prop | grep 'Broadcast RGB: Full' | wc -l)" = "0" ] ; then
    /usr/bin/xrandr --output HDMI3 --set "Broadcast RGB" "Full"
fi
And since I'm using lightdm, adding the following to /etc/lightdm/lightdm.conf means the flicker only happens once during bootup:

[SeatDefaults]
display-setup-script=/etc/X11/Xsession.d/95fullrgb
Important: when using the LightDM setting, set the executable bit (chmod +x) on /etc/X11/Xsession.d/95fullrgb for it to work. Obviously also check your output name; for me it was HDMI3.

If there is no situation where the setting reverts to "Limited 16:235" on its own, the display manager script should be enough; having it in /etc/X11/Xsession.d as well is redundant and slows login down. For me login maybe went from 2 seconds to 3, since executing the xrandr query is not cheap.


Note that, unrelated to Full range usage, the Limited range currently behaves incorrectly on Haswell until the patch in bug #71769 is accepted. That means the blacks are grey in Limited mode even if the screen is also set to Limited.

I'd prefer a kernel parameter for the Broadcast RGB setting, although my Haswell machine boots so fast that I don't get to see too many seconds of wrong colors...

by Timo Jyrinki ( at November 27, 2013 08:50 AM

September 16, 2013


Ubuntu installation workshop on Thursday 19 September in Helsinki

Software Freedom Day is an international event, celebrated this year with 286 events around the world. The purpose of the day is to raise awareness of open source software.

Software Freedom Day 2013

What is an open program?

An open program is one whose licence guarantees its users four basic freedoms:

  1. to use the program without restriction
  2. to study how the program works (from its source code)
  3. to modify the program and make new versions of it
  4. to redistribute the program to anyone

Well-known open source software includes the Android operating system, the Firefox and Chromium web browsers, the VLC media player and the LibreOffice office suite. Yet according to studies, despite this widespread use, only a third of Finns know what open source is.

The event aims to make people aware of where software comes from, how it is developed, and what the interests of its makers and developers are. The organisers believe that open programs are both ethically and technically a better choice than closed ones.

Open source is also economically significant. For example, well-known web services such as Google, Facebook and Twitter run on open source server software, and these services would not have emerged without open software.

Open source is particularly topical right now because it is the only way to protect against backdoors built for surveillance. Open source code makes it possible to examine in detail how a program works.

Open source ought to be especially well known in Finland, since many lead developers of open software are Finnish – for example Linus Torvalds, Michael “Monty” Widenius and Timo Sirainen.

In Finland, the gathering is in Helsinki

In Finland the event is celebrated under the name Avointen ohjelmien päivä (Open Software Day), a free event open to everyone, starting at 17:30 on Thursday 19 September in Helsinki.

The event begins with an introduction to what open programs are and where to find them. It then continues as an installation workshop, where the experts present help install software onto the audience's own computers. On offer are Ubuntu installations as well as other Linux distributions, plus open source applications for Windows and Mac. The event is the easiest way to get acquainted with open software and start using it.

A more detailed programme and a registration form can be found on the website.

The event is organised jointly by the Finnish team of the Free Software Foundation Europe (FSFE) and COSS ry, the Finnish Centre for Open Systems and Solutions.

Pages of the Finnish event:

Pages of the international event:

Join COSS

If you want to support this and other work promoting open source in Finland, become a COSS member at:

by Otto at September 16, 2013 12:37 PM

August 15, 2013

Aapo Rantalainen

The stick game and some mathematics

Which player wins the 7531 stick game? And why?
Rules for two players.
Starting position: the sticks lie in rows – 7 sticks in the first row, 5 in the next, then 3, and 1 in the last.

On their turn, a player chooses one row and removes as many sticks from it as they wish. At least one, however; all of them if they want (and of course no more than the row contains).

The loser is the player who has to take the last stick from the board.

Shall I take the first move, or do you want to start the game? Prove your answer.


Stop here and think for a while.

The answer begins:

Let us introduce a notation in which every game position is described by four digits. Since the order of the stick rows does not matter, we agree that the digits are always written from largest to smallest. So the states 2100 = 2010 = 2001 = 1200 = 1020 = 0120 = 0210 = 0201 = 0021; we denote all of these by 2100.
Now the losing condition of the game is:

Definition I
A player loses if they are faced with the state 1000.
(Side note: if we left the zeros out entirely, the number of stick rows at the start could be something other than four.)

What follows is a mathematical proof. A ‘lemma’ is an auxiliary proposition. Programmers can think of it as a function call (not to be confused with mathematical functions). I always define a lemma before using it, so that no circular reasoning can arise.

I claim that the first player always loses (this only becomes clear at the very end of the proof).
Claim: for every (∀) move by the first player there exists (∃) a reply from the opponent with which the first player loses.

Lemma 1110: the player whose turn it is loses if faced with the state 1110.
Proof: whatever move the player makes, the next state is 1100.
From which the opponent returns 1000. Loses by the definition.

Lemma 2200: loses
Proof: (The player can take either one or two sticks from one of the two rows.)
Can end up in state
a) 2100
From which the opponent returns 1000. Loses by the definition.
b) 2000
From which the opponent returns 1000. Loses by the definition.

Lemma 2211: loses
Proof: Can end up in state
a) 2210
From which the opponent returns 2200. Loses by Lemma 2200.
b) 2111
From which the opponent returns 1110. Loses by Lemma 1110.
c) 2110
From which the opponent returns 1110. Loses by Lemma 1110.

Lemma 3210: loses
Proof: Can end up in state
a) 3200
From which the opponent returns 2200. Loses by Lemma 2200.
b) 3110
From which the opponent returns 1110. Loses by Lemma 1110.
c) 2210
From which the opponent returns 2200. Loses by Lemma 2200.
d) 2110
From which the opponent returns 1110. Loses by Lemma 1110.
e) 2100
From which the opponent returns 1000. Loses by the definition.

Lemma 3300: loses
Proof: Can end up in state
a) 3200
From which the opponent returns 2200. Loses by Lemma 2200.
b) 3100
From which the opponent returns 1000. Loses by the definition.
c) 3000
From which the opponent returns 1000. Loses by the definition.

Lemma 3311: loses
Proof: Can end up in state
a) 3310
From which the opponent returns 3300. Loses by Lemma 3300.
b) 3211
From which the opponent returns 2211. Loses by Lemma 2211.
c) 3111
From which the opponent returns 1110. Loses by Lemma 1110.
d) 3110
From which the opponent returns 1110. Loses by Lemma 1110.

Lemma 4400: loses
Proof: Can end up in state
a) 4300
From which the opponent returns 3300. Loses by Lemma 3300.
b) 4200
From which the opponent returns 2200. Loses by Lemma 2200.
c) 4100
From which the opponent returns 1000. Loses by the definition.
d) 4000
From which the opponent returns 1000. Loses by the definition.

Lemma 4411: loses
Proof: Can end up in state
a) 4410
From which the opponent returns 4400. Loses by Lemma 4400.
b) 4311
From which the opponent returns 3311. Loses by Lemma 3311.
c) 4211
From which the opponent returns 2211. Loses by Lemma 2211.
d) 4111
From which the opponent returns 1110. Loses by Lemma 1110.
e) 4110
From which the opponent returns 1110. Loses by Lemma 1110.

Lemma 5500: loses
Proof: Can end up in state
a) 5400
From which the opponent returns 4400. Loses by Lemma 4400.
b) 5300
From which the opponent returns 3300. Loses by Lemma 3300.
c) 5200
From which the opponent returns 2200. Loses by Lemma 2200.
d) 5100
From which the opponent returns 1000. Loses by the definition.
e) 5000
From which the opponent returns 1000. Loses by the definition.

Lemma 5410: loses
Proof: Can end up in state
a) 5400
From which the opponent returns 4400. Loses by Lemma 4400.
b) 5310
From which the opponent returns 3210. Loses by Lemma 3210.
c) 5210
From which the opponent returns 3210. Loses by Lemma 3210.
d) 5110
From which the opponent returns 1110. Loses by Lemma 1110.
e) 5100
From which the opponent returns 1000. Loses by the definition.
f) 4410
From which the opponent returns 4400. Loses by Lemma 4400.
g) 4310
From which the opponent returns 3210. Loses by Lemma 3210.
h) 4210
From which the opponent returns 3210. Loses by Lemma 3210.
i) 4110
From which the opponent returns 1110. Loses by Lemma 1110.
j) 4100
From which the opponent returns 1000. Loses by the definition.

Lemma 5511: loses
Proof: Can end up in state
a) 5510
From which the opponent returns 5500. Loses by Lemma 5500.
b) 5411
From which the opponent returns 4411. Loses by Lemma 4411.
c) 5311
From which the opponent returns 3311. Loses by Lemma 3311.
d) 5211
From which the opponent returns 2211. Loses by Lemma 2211.
e) 5111
From which the opponent returns 1110. Loses by Lemma 1110.
f) 5110
From which the opponent returns 1110. Loses by Lemma 1110.

Lemma 6420: loses
Proof: Can end up in state
a) 6410
From which the opponent returns 5410. Loses by Lemma 5410.
b) 6400
From which the opponent returns 4400. Loses by Lemma 4400.
c) 6320
From which the opponent returns 3210. Loses by Lemma 3210.
d) 6220
From which the opponent returns 2200. Loses by Lemma 2200.
e) 6210
From which the opponent returns 3210. Loses by Lemma 3210.
f) 6200
From which the opponent returns 2200. Loses by Lemma 2200.
g) 5420
From which the opponent returns 5410. Loses by Lemma 5410.
h) 4420
From which the opponent returns 4400. Loses by Lemma 4400.
i) 4320
From which the opponent returns 3210. Loses by Lemma 3210.
j) 4220
From which the opponent returns 2200. Loses by Lemma 2200.
k) 4210
From which the opponent returns 3210. Loses by Lemma 3210.
l) 4200
From which the opponent returns 2200. Loses by Lemma 2200.

Lemma 6431: loses
Proof: Can end up in state
a) 6430
From which the opponent returns 6420. Loses by Lemma 6420.
b) 6421
From which the opponent returns 6420. Loses by Lemma 6420.
c) 6411
From which the opponent returns 4411. Loses by Lemma 4411.
d) 6410
From which the opponent returns 5410. Loses by Lemma 5410.
e) 6331
From which the opponent returns 3311. Loses by Lemma 3311.
f) 6321
From which the opponent returns 3210. Loses by Lemma 3210.
g) 6311
From which the opponent returns 3311. Loses by Lemma 3311.
h) 6310
From which the opponent returns 3210. Loses by Lemma 3210.
i) 5431
From which the opponent returns 5410. Loses by Lemma 5410.
j) 4431
From which the opponent returns 4411. Loses by Lemma 4411.
k) 4331
From which the opponent returns 3311. Loses by Lemma 3311.
l) 4321
From which the opponent returns 3210. Loses by Lemma 3210.
m) 4311
From which the opponent returns 3311. Loses by Lemma 3311.
n) 4310
From which the opponent returns 3210. Loses by Lemma 3210.

Lemma 6521: loses
Proof: Can end up in state
a) 6520
From which the opponent returns 6420. Loses by Lemma 6420.
b) 6511
From which the opponent returns 5511. Loses by Lemma 5511.
c) 6510
From which the opponent returns 5410. Loses by Lemma 5410.
d) 6421
From which the opponent returns 6420. Loses by Lemma 6420.
e) 6321
From which the opponent returns 3210. Loses by Lemma 3210.
f) 6221
From which the opponent returns 2211. Loses by Lemma 2211.
g) 6211
From which the opponent returns 2211. Loses by Lemma 2211.
h) 6210
From which the opponent returns 3210. Loses by Lemma 3210.
i) 5521
From which the opponent returns 5511. Loses by Lemma 5511.
j) 5421
From which the opponent returns 5410. Loses by Lemma 5410.
k) 5321
From which the opponent returns 3210. Loses by Lemma 3210.
l) 5221
From which the opponent returns 2211. Loses by Lemma 2211.
m) 5211
From which the opponent returns 2211. Loses by Lemma 2211.
n) 5210
From which the opponent returns 3210. Loses by Lemma 3210.

Lemma 6530: loses
Proof: Can end up in state
a) 6520
From which the opponent returns 6420. Loses by Lemma 6420.
b) 6510
From which the opponent returns 5410. Loses by Lemma 5410.
c) 6500
From which the opponent returns 5500. Loses by Lemma 5500.
d) 6430
From which the opponent returns 6420. Loses by Lemma 6420.
e) 6330
From which the opponent returns 3300. Loses by Lemma 3300.
f) 6320
From which the opponent returns 3210. Loses by Lemma 3210.
g) 6310
From which the opponent returns 3210. Loses by Lemma 3210.
h) 6300
From which the opponent returns 3300. Loses by Lemma 3300.
i) 5530
From which the opponent returns 5500. Loses by Lemma 5500.
j) 5430
From which the opponent returns 5410. Loses by Lemma 5410.
k) 5330
From which the opponent returns 3300. Loses by Lemma 3300.
l) 5320
From which the opponent returns 3210. Loses by Lemma 3210.
m) 5310
From which the opponent returns 3210. Loses by Lemma 3210.
n) 5300
From which the opponent returns 3300. Loses by Lemma 3300.

CLAIM: The position 7531 is a loss.
Proof: The position can move to
a) 7530
From this the opponent returns 6530. By Lemma 6530, this is a loss.
b) 7521
From this the opponent returns 6521. By Lemma 6521, this is a loss.
c) 7511
From this the opponent returns 5511. By Lemma 5511, this is a loss.
d) 7510
From this the opponent returns 5410. By Lemma 5410, this is a loss.
e) 7431
From this the opponent returns 6431. By Lemma 6431, this is a loss.
f) 7331
From this the opponent returns 3311. By Lemma 3311, this is a loss.
g) 7321
From this the opponent returns 3210. By Lemma 3210, this is a loss.
h) 7311
From this the opponent returns 3311. By Lemma 3311, this is a loss.
i) 7301
From this the opponent returns 3210. By Lemma 3210, this is a loss.
j) 6531
From this the opponent returns 6431. By Lemma 6431, this is a loss.
k) 5531
From this the opponent returns 5511. By Lemma 5511, this is a loss.
l) 5431
From this the opponent returns 5410. By Lemma 5410, this is a loss.
m) 5331
From this the opponent returns 3311. By Lemma 3311, this is a loss.
n) 5321
From this the opponent returns 3210. By Lemma 3210, this is a loss.
o) 5311
From this the opponent returns 3311. By Lemma 3311, this is a loss.
p) 5310
From this the opponent returns 3210. By Lemma 3210, this is a loss.
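The case analysis above is consistent with classical Nim theory: reading the four digits as pile sizes under normal play (an assumption; the original rules are not restated here), every position the proof labels a loss has a nim-sum, the XOR of the pile sizes, equal to zero. A quick check:

```python
# Verify that each position the proof labels a loss has nim-sum zero.
# Assumption: the digits are Nim pile sizes and normal play applies.
from functools import reduce

def nim_sum(position: str) -> int:
    """XOR together the pile sizes encoded as digits, e.g. '7531'."""
    return reduce(lambda a, b: a ^ b, (int(d) for d in position))

losing = ["7531", "6530", "6521", "6431", "6420",
          "5511", "5500", "5410", "3311", "3300", "3210", "2211"]
for p in losing:
    assert nim_sum(p) == 0, p
print("all listed losing positions have nim-sum 0")
```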

by Aapo Rantalainen at August 15, 2013 08:30 PM

July 10, 2013


Latest Compiz gaming update to the Ubuntu 12.04 LTS

A new Compiz window manager performance update reached Ubuntu 12.04 LTS users last week. It completes the earlier [1] [2] work on enabling 'unredirected' (compositing disabled) fullscreen mode for games and other applications, for performance benefits.

The update has two fixes. The first fixes a compiz CPU usage regression. The second enables unredirection also for Intel and Nouveau users on the Mesa 9.0.x stack. That means up-to-date installs from the 12.04.2 LTS installation media, and anyone with an original 12.04 LTS installation who has opted in to the 'quantal' package updates of the kernel, X.Org and Mesa. *)

The new default setting for the unredirection blacklist is shown in the image below (CompizConfig Settings Manager -> General -> OpenGL). It now blacklists only the original Mesa 8.0.x series for nouveau and intel, plus plain '9.0' (not the point releases).

I did new runs of OpenArena from a 12.04.2 LTS live USB. For comparison I first did a run with the non-updated Mesa 9.0 from February. I then allowed Ubuntu to upgrade Mesa to the current 9.0.3, and ran the test with both the previous version of Compiz and the newly released one.

12.04.2 LTS    Mesa 9.0   | Mesa 9.0.3 | Mesa 9.0.3
               old Compiz | old Compiz | new Compiz
OpenArena fps    29.63    |   31.90    | 35.03     

Reading into the results, Mesa 9.0.3 seems to have reduced the slowdown in the redirected case, which would benefit normal desktop usage as well. Meanwhile, unredirected performance remains about 10% higher.
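As a sanity check on those numbers, the relative gains can be computed directly from the fps values quoted in the table:

```python
# Relative speedups from the OpenArena runs quoted above.
mesa_90, mesa_903_old, mesa_903_new = 29.63, 31.90, 35.03

mesa_gain = (mesa_903_old / mesa_90 - 1) * 100        # Mesa update alone (redirected)
compiz_gain = (mesa_903_new / mesa_903_old - 1) * 100  # new Compiz unredirection on top

print(f"Mesa 9.0 -> 9.0.3: +{mesa_gain:.1f}% fps")
print(f"old -> new Compiz: +{compiz_gain:.1f}% fps")
```

That works out to roughly +8% from the Mesa update alone and just under +10% from the new Compiz, matching the "about 10% higher" estimate for unredirected performance.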

*) Packages linux-generic-lts-quantal, xserver-xorg-lts-quantal, libgl1-mesa-dri-lts-quantal and libegl1-mesa-drivers-lts-quantal. A 'raring' stack with Mesa 9.1 and kernel 3.8 will be available around the time of the 12.04.3 LTS installation media in late August.

by Timo Jyrinki at July 10, 2013 12:01 PM

May 21, 2013


Network from laptop to Android device over USB

If you're running an Android device with GNU userland Linux in a chroot and need a full network access over USB cable (so that you can use your laptop/desktop machine's network connection from the device), here's a quick primer on how it can be set up.

When doing Openmoko hacking, one always first plugged in the USB cable and forwarded the network, or, as I did later, forwarded the network over Bluetooth. That was mostly because WiFi was quite unstable with many of the kernels.

I recently found myself using a chroot on a Nexus 4 without working WiFi, so instead of my usual WiFi usage I needed network over USB... trivial, of course, except that Android is in the way and I'm an Android newbie. Thanks to ZDmitry on Freenode, I got the bits for the Android part and got it working.

On the device, have e.g. data/ with the following contents.

ip addr add dev usb0
ip link set usb0 up
ip route delete default
ip route add default via;
setprop net.dns1
echo 'nameserver' >> $CHROOT/run/resolvconf/resolv.conf
On the host, execute the following:
adb shell setprop sys.usb.config rndis,adb
adb shell data/
sudo ifconfig usb0
sudo iptables -A POSTROUTING -t nat -j MASQUERADE -s
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo iptables -P FORWARD ACCEPT
This works at least with an Ubuntu saucy chroot. The main difference in some other distro might be whether resolv.conf has moved to /run or not. You should now be all set to browse / apt-get stuff from the device again.

Update: Clarified that this is to forward the desktop/laptop's network connection to the device so that network is accessible from the device over USB.
Update 2, 09/2013: It's also possible to get this working on the newer flipped images. Remove the "$CHROOT" from the nameserver echo line and it should be fine. In quick testing, the connection somehow got reset after a while, at which point another run of data/ on the device restored it.

by Timo Jyrinki at May 21, 2013 12:20 PM

April 23, 2013


Experiences with the Dell XPS 13 Ubuntu laptop

Dell XPS 13 Ubuntu Edition

The Dell XPS 13 Ubuntu Edition, made in collaboration between Dell and Canonical, the company developing Ubuntu, has now been released as a refreshed version, and it is also available in Finland. The laptop is similar in style to the MacBook Air 13, but it is a centimetre narrower without the display or keyboard being any smaller, and many things are done better, such as the operating system :)

Based on quick testing, the machine is very good. A fast Intel i7 processor, 8 GB of RAM and a 256 GB SSD make the laptop extremely fast. There is a fan on the bottom, but it normally does not spin at all, so the laptop is practically silent. Battery life is 5-10 hours depending on use, and even the power supply is so small that it is handy to carry along.

The hardware's Linux drivers are naturally excellent, and everything works as you would expect from a preinstalled Linux laptop. Many details, such as the backlit keyboard, a powered USB port that works even when the laptop is switched off, and a battery charge level indicator, earn extra points for a device that already feels high quality with its aluminium and kevlar construction. Perhaps the best feature, though, is the high-resolution 1920×1080 display.

The Dell XPS 13 Ubuntu Edition and other preinstalled Ubuntu machines can currently be bought directly from only one Dell reseller; by requesting a quote it can also be bought from Pirkanmaan Konttorikone Oy or, for business customers, directly from Dell. The device could not be ordered from e.g. Jimm's PC or Gigantti even by requesting a quote, but hopefully such a good device will become more widely available.

More photos and a detailed review can be found on Seravo's blog (in English).

by Otto at April 23, 2013 09:44 AM

March 30, 2013

Jouni Roivas


Usually it's easy to get things working with Qt, but recently I encountered an issue when trying to implement a simple component derived from QGraphicsWidget. My initial idea was to use QGraphicsItem, so I made this little class:

class TestItem : public QGraphicsItem
    TestItem(QGraphicsItem *parent = 0) : QGraphicsItem(parent) {}

    void paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget = 0);
    virtual QRectF boundingRect() const;

    virtual void mousePressEvent(QGraphicsSceneMouseEvent *event);
    virtual void mouseReleaseEvent(QGraphicsSceneMouseEvent *event);

void TestItem::mousePressEvent(QGraphicsSceneMouseEvent *event)
    qDebug() << __PRETTY_FUNCTION__ << "press";

void TestItem::mouseReleaseEvent(QGraphicsSceneMouseEvent *event)
    qDebug() << __PRETTY_FUNCTION__ << "release";

void TestItem::paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget)
    painter->fillRect(boundingRect(), QColor(255, 0, 0, 100));

QRectF TestItem::boundingRect() const
    return QRectF(-100, -40, 100, 40);
Everything was working as expected, but in order to use a QGraphicsLayout, I wanted to derive the class from QGraphicsWidget instead. The naive way was to make minimal changes:

class TestWid : public QGraphicsWidget
    TestWid(QGraphicsItem *parent = 0) : QGraphicsWidget(parent) {}

    void paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget = 0);
    virtual QRectF boundingRect() const;

    virtual void mousePressEvent(QGraphicsSceneMouseEvent *event);
    virtual void mouseReleaseEvent(QGraphicsSceneMouseEvent *event);

void TestWid::mousePressEvent(QGraphicsSceneMouseEvent *event)
    qDebug() << __PRETTY_FUNCTION__ << "press";

void TestWid::mouseReleaseEvent(QGraphicsSceneMouseEvent *event)
    qDebug() << __PRETTY_FUNCTION__ << "release";

void TestWid::paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget)
    painter->fillRect(boundingRect(), QColor(0, 0, 255, 100));

QRectF TestWid::boundingRect() const
    return QRectF(-100, -40, 100, 40);

Pretty straightforward, isn't it? It showed and painted things as expected, but I didn't get any mouse events. Wait, what?

I spent hours just trying things out and googling the problem. I knew I had hit this very same issue earlier but didn't remember how I solved it. Until I figured out a crucial thing: in the case of QGraphicsWidget you must NOT implement boundingRect(). Instead, use setGeometry() on the object.

So the needed changes were to remove the boundingRect() method and to call setGeometry() in the TestWid constructor:

setGeometry(QRectF(-100, -40, 100, 40));

After these tiny little changes I finally got everything working. The whole thing made me really frustrated. Solving the issue didn't bring a good feeling; I just felt stupid. Sometimes programming is a great waste of time.

by Jouni Roivas at March 30, 2013 01:57 PM

March 14, 2013


Ubuntu laptops finally available in Finland too

Preinstalled, supported Ubuntu computers have been sold around the world for a long time, in the East especially in the millions. Ubuntu partners include Dell, Lenovo, Asus and nowadays HP. In European countries at least Dell and Asus have been doing business as well, but in Finland the selection of brand-name Ubuntu computers has been scarce.

Two and a half years ago I bought a Dell Latitude 2110 mini laptop with Ubuntu, but since then there have been periods when, at least judging from Dell Finland's web pages, Ubuntu has not been available as an option. Recently, however, offerings have appeared again, and not just ordinary ones.

Deservedly the first to mention is the top-of-the-line ultrabook "Dell XPS 13 developer edition" (for details, see the PDF on this page), which, unlike in other European countries, can be ordered in Finland only by phone. The XPS 13 makes pretty much all competing ultrabooks pale in comparison: it offers a 13.3″ Full HD display with IPS technology and Gorilla Glass, 8 GB of memory, a 256 GB SSD and, of course, Ubuntu 12.04 LTS tailored for the device.

Somewhat more ordinary laptop offerings are represented by the Dell Vostro 2520 with a 15.6″ display, sold by Teraset. It offers a 2.2 GHz i3-2228M processor, 4 GB of memory, a traditional 320 GB hard drive and everything a modern basic laptop needs. The Vostro also ships with Ubuntu 12.04 LTS.

Update: As mentioned in the comments, many other Dell resellers, such as Pohjolan tietotekniikka, Data Group Hyvinkää and Jimm's, nowadays sell Dell laptops and workstations with Ubuntu 12.04.

Timo Jyrinki


by Timo Jyrinki at March 14, 2013 05:45 AM

January 13, 2013

Riku Leino

Cleaning up the office bookshelf

I cleaned up the bookshelf in my office. A lot of books went into the trash, but a few were left to give away. Among them are some true programming classics. Send me email if you are interested in a book, or several.


by Tsoots at January 13, 2013 04:07 PM

August 31, 2012

Jouni Roivas

Adventures in Ubuntu land with Ivy Bridge

Recently I got an Intel Ivy Bridge based laptop. Generally I'm quite satisfied with it. Of course I installed the latest Ubuntu on it. The first problem was EFI boot, and the BIOS had no other options. The best way to work around it was to use an EFI-aware grub2. I wanted to keep the preinstalled Windows 7 there for a couple of things, so I needed dual boot.

After digging around, this German link was the most relevant and helpful:

In the end all I needed to do was install Grub2 to the EFI boot partition (/dev/sda1 in my case) and create the grub.efi binary under it. Then just copy /boot/grub/grub.cfg under it as well. In the BIOS, set up a new boot label to boot \EFI\grub\grub.efi.

After using the system for a couple of days I ran into random crashes. The system hung completely. Finally I traced the problem to the HD4000 graphics driver:

I needed to update the kernel. But which one? After multiple tries, I took the "latest" and "shiniest" one: With that kernel I got almost all the functionality and stability I needed.

However, one BIG problem remained: headphones. I got sound normally from the speakers, but after plugging in the headphones I got nothing. This problem seemed to exist on almost all the kernels I tried. Then I somehow figured out an important thing related to it: when I boot with the headphones plugged in, I get no sound from them. When I boot WITHOUT the headphones plugged in, they work just fine. Of course I had debugged this problem the whole time with the headphones plugged in and never noticed it could be some weird detection problem. Since I had kind of found a solution, I didn't bother to google it further. And of course Canonical does not provide support for unsupported kernels. If I remember correctly, this worked with the original Ubuntu 12.04 kernel, but the HD4000 problem is on my scale a bigger one than remembering to boot without plugging anything into the 3.5 mm jack...

Of course my hopes are on 12.10, and I don't want to dig deeper; I just wanted to inform you about this one.

by Jouni Roivas at August 31, 2012 07:57 PM

July 04, 2012

Ville-Pekka Vainio

SSD TRIM/discard on Fedora 17 with encrypted partitions

I have not blogged for a while; now that I am on summer holiday and got a new laptop, I finally have something to blog about. I got a Thinkpad T430 and installed a Samsung SSD 830 myself. The 830 is not actually the best choice for a Linux user, because firmware updates can only be downloaded with a Windows tool. The tool does let you make a bootable FreeDOS USB disk with which you can apply the update, so you can use a Windows system to download the update and apply it just fine on a Linux system. The reason I got this SSD is that it is 7 mm in height and fits into the T430 without removing any spacers.

I installed Fedora 17 on the laptop and selected drive encryption in the Anaconda installer. I used ext4 and did not use LVM; I do not think it would be of much use on a laptop. After the installation I discovered that Fedora 17 does not enable SSD TRIM/discard automatically. That is probably a good default, since apparently not all SSDs support it. When you have ext4 partitions encrypted with LUKS, as Anaconda sets them up, you need to change two files and regenerate your initramfs to enable TRIM.

First, edit your /etc/fstab and add discard to each ext4 mount. Here is an example of my root mount:
/dev/mapper/luks-secret-id-here / ext4 defaults,discard 1 1

Second, edit your /etc/crypttab and add allow-discards to each line to allow the dmcrypt layer to pass TRIM requests to the disk. Here is an example:
luks-secret-id-here UUID=uuid-here none allow-discards

You need at least dracut-018-78.git20120622.fc17 for this to work, which you should already have on an up-to-date Fedora 17.

Third, regenerate your initramfs by doing dracut -f. You may want to take a backup of the old initramfs file in /boot but then again, real hackers do not make backups ;) .

Fourth, reboot and check with cryptsetup status luks-secret-id-here and mount that your file systems actually use discard now.
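For illustration, the two file edits above can be expressed as simple string transformations; a minimal sketch (hypothetical helper functions, not part of Fedora or dracut), shown here on example lines rather than the real files:

```python
def add_fstab_discard(line: str) -> str:
    """Append 'discard' to the mount options of an ext4 fstab line."""
    fields = line.split()
    if len(fields) >= 4 and fields[2] == "ext4" and "discard" not in fields[3].split(","):
        fields[3] += ",discard"
    return " ".join(fields)

def add_crypttab_discard(line: str) -> str:
    """Append the allow-discards option to a crypttab line."""
    fields = line.split()
    if "allow-discards" not in fields:
        fields.append("allow-discards")
    return " ".join(fields)

print(add_fstab_discard("/dev/mapper/luks-secret-id-here / ext4 defaults 1 1"))
print(add_crypttab_discard("luks-secret-id-here UUID=uuid-here none"))
```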

Please note that apparently enabling TRIM on encrypted file systems may reveal unencrypted data.

by Ville-Pekka Vainio at July 04, 2012 06:14 PM

April 29, 2012

Miia Ranta

Viglen MPC-L from Xubuntu 10.04 LTS to Debian stable

With Ubuntu not supplying a kernel suitable for the CPU of my Viglen MPC-L (a Geode GX2 by National Semiconductor, a 486-class chip buzzing at a 399 MHz clock rate; the one Duncan documented the installation of Xubuntu on in 2010), it was time to look for other alternatives. I wasn't too keen on the idea of using some random repository to get a suitable kernel for a newer version of Ubuntu, so Debian was the next best thing that came to mind.

Friday night, right before heading out to pub with friends, I sat on the couch, armed with a laptop, USB keyboard, RGB cable and a USB memory stick. Trial and error reminded me to

  1. use bittorrent to download the image, since our flaky Belkin-powered WiFi cuts the connection every few minutes and thus corrupts direct downloads, and
  2. do the boot script magic of pnpbios=off noapic acpi=off as with our earlier Xubuntu installation.

In contrast to the experience of installing Xubuntu on the Viglen MPC-L, the Debian installation was easy from here on. The installer seemed to not only detect the needed kernel and install the correct one (Linux wizzle 2.6.32-5-486 #1 Mon Mar 26 04:36:28 UTC 2012 i586 GNU/Linux) but, judging from the success of the first reboot after the installation had finished and a quick look at /boot/grub/grub.cfg, had also set the right boot options automatically. So the basic setup was a *lot* easier than it was with Xubuntu!

Some things that I've gotten used to being automatically installed with Ubuntu weren't preinstalled with Debian, so I had to install them for my usage. Tasksel installed an ssh server, but rsync, lshw and ntfs-3g, which I had gotten used to having in Ubuntu, needed to be installed as well; installing them wasn't too much of a chore. As I use my Viglen MPC-L as my main irssi shell nowadays, I of course had to install irssi, plus some other stuff needed by it and my other usage patterns... so after installing apt-file pastebinit zsh fail2ban for my pet peeves, and tmux irssi irssi-scripts libcrypt-blowfish-perl libcrypt-dh-perl libcrypt-openssl-bignum-perl libdbi-perl sqlite3 libdbd-sqlite3-perl, I finally have approximately the system I needed.

All in all, the experience was a lot easier than what I had with Xubuntu in September 2010. It definitely surprised me and I kind of hope that this process wasn’t as easy and automated 18 months ago…

by myrtti at April 29, 2012 10:00 PM

January 27, 2012

Aapo Rantalainen

Nokia Lumia 800 for Linux-developer

I got my Nokia Lumia 800 (a Windows Phone 7 device) from Nokia, and I consider myself a Linux developer.

I attached the Lumia to my computer and nothing happened. I went to a discussion forum and learned there is no way to access the phone from Linux. End of story (it was not a long story).

by Aapo Rantalainen at January 27, 2012 07:18 PM

January 24, 2012

Sakari Bergen

WhiteSpace faces in Emacs 23

This is a good old case of RTFM, but since I spent a couple of hours figuring it out, I thought I’d blog about it anyway…

The WhiteSpace package in Emacs allows you to visualize whitespace in your code. The overall settings of the package are controlled with the ‘whitespace-style’ variable. Before Emacs 23 you didn’t need to include the ‘face’ option to make different faces work. However, since Emacs 23 you need to have it set.

Now I can keep obsessing about whitespace with an up-to-date version of Emacs, and maybe publicly posting stuff like this will help me remember to RTFM in the future also :)

by sbergen at January 24, 2012 05:35 PM

January 09, 2012

Sakari Bergen

Multiuser screen made easy

The idea for this all started with someone mentioning

it’d be good if there was some magic thing which did some SSH voodoo to get you a shell that the person on the other end could watch

So, I took a quick look around and noticed that Screen can already do multiuser sessions, which do exactly this. However, controlling the session requires writing commands to screen, which is both relatively complex for beginners and relatively slow if the remote user is typing in ‘rm -Rf *’ ;)

So, I created a wizard-like python script, which sets up a multiuser screen session and a simple one button GUI (using PyGTK) for allowing and disallowing the remote user access to the session. It also optionally creates a script which makes it easier for the remote user to attach to the session.


Known issues:

  • The helper script creation process for the remote user does not check the user input and runs sudo. Even though the script warns the user, it’s still a potential security risk
  • If the script is terminated unexpectedly, the screen session will stay alive, and will need to be closed manually before this script will work again

Resolving the issues?

Fixing the security issue would just be a matter of more work. However, the lingering screens are a whole different problem: I tried to find a way to get the pid of the screen session, but failed to find one in Python. That would have made the lingering screen sessions less harmful, as all the communication could have been done with <pid>.<session> instead of simply <session>, which the script uses now. The subprocess.Popen object contains the pid of the launched process, but the actual screen session is a child of that process, and thus has a different pid. If anyone can point me toward a solution, it'd be greatly appreciated!
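For what it's worth, one possible workaround for the pid problem is parsing the output of `screen -ls`, which lists each session as pid.sessionname. A sketch (the session name and sample output below are made up), shown on a canned string rather than a live screen:

```python
import re

def screen_pids(ls_output: str, session: str) -> list[int]:
    """Extract the pids of screen sessions named `session` from `screen -ls` output."""
    # Session lines look like: "\t12345.shared\t(Multi, attached)"
    return [int(m.group(1))
            for m in re.finditer(r"^\s*(\d+)\.(\S+)", ls_output, re.MULTILINE)
            if m.group(2) == session]

example = """There is a screen on:
\t12345.shared\t(Multi, attached)
1 Socket in /var/run/screen/S-user.
"""
print(screen_pids(example, "shared"))
```

In real use the string would come from something like subprocess.check_output(["screen", "-ls"]), and the pid could then be used to address the session as <pid>.<session>.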

by sbergen at January 09, 2012 07:55 PM

January 03, 2012

Sakari Bergen

New site up!

I finally got the work done, and here's the result! I moved from Drupal to WordPress, as it feels better for my needs. So far I've enjoyed it more than Drupal.

I didn’t keep all of the content from my old site: I recreated most of it and added some new content. I also went through links to my site with Google’s Webmaster Tools, and added redirects to urls which are linked to from other sites (and resurrected one blog post).

It’s been a while since I did any PHP, HTML or CSS. I almost got frustrated for a moment, but after reading this article, things progressed much more easily. Thanks to the author, Andrew Tetlaw! I was also inspired by David Robillard’s site, which is mostly based on the Barthelme theme. However, I started out with Automattic’s Toolbox theme, customizing most of it.

If you find something that looks or feels strange, please comment!

by sbergen at January 03, 2012 08:57 PM

December 28, 2011

Aapo Rantalainen

Christmas charity donation targets

Christmas is a good time to donate money to charity, isn't it? Here are a couple of tips for those who want an easy way to take part in doing good via PayPal.


Who doesn't know Wikipedia? But does everyone know that behind it is quite a small foundation? Google, for example, has about a million servers; Wikipedia has 679. Yahoo has some 13,000 employees; Wikipedia has 95.



Free Software Foundation

The 'free' in the foundation's name does not mean free of charge, but 'freedom'. Software freedom means:

- permission to use it in any way you like
- permission to study how it works and how it is made
- permission to change how it works (that is, to fix or improve it)
- permission to copy it for others, modified or unmodified

Free Software is an ideal that encourages you, too, to think: "imagine a world in which all software is free." Is the software you use free?

by Aapo Rantalainen at December 28, 2011 12:59 PM

December 08, 2011

Mikko Rauhala (mjr)

Games Workshop forces takedown of tank model

You know you're living in the future when Games Workshop pulls a DMCA takedown on a tank model lest people print their own toys instead of buying them at a premium.

Now, this is probably a pretty cut-and-dried case even from a trademark perspective - the file is even named Warhammer 40k Imperial Guard Leman Russ Tank. You don't need to support copyrights to think that's a bit dubious, since it's not, in fact, an official Warhammer 40k model. On the other hand, people are rather unlikely to actually get the wrong impression in that regard, and neither is it a deliberate attempt to fool people, since the creator readily acknowledges the unofficial and unlicensed nature of the work in the description.

On the third hand, what if it was just named a "Tank model compatible with Warhammer 40k (TM)" or something? It is reportedly a "pretty good likeness" of the Warhammer tank, but "not verbatim". Given that copyright monopolies exist, lines will have to be drawn in the sand, and there will be a lot of effort to push that line by large corporations, wielding copyrights as a huge blunt instrument with chilling effects. Thingiverse and similar sites operating under US law practically have to take down any work that is claimed to violate copyrights or face liability, at least until a counterclaim is made by the submitter.

In Finland, I'm not even sure if anything would keep the site safe from liability - around here, if you run a service that people violate copyrights on and get caught, you can pretty much expect to get screwed by the so-called justice system to the point that it's just best to become a welfare leech for the rest of your life since you're not gonna be able to pay off the copyright mafia anyway. But I digress.

Of course, there's safer havens for these kinds of sites to operate still. But apparently, the war between corporations and home manufacturers is on.

by Mikko Rauhala at December 08, 2011 12:36 PM

December 05, 2011

Aapo Rantalainen

MeeGo on ExoPC

Even though Ubuntu runs very well on the ExoPC (see the last post), I had promised to return it with MeeGo, so here we go…

Download the latest image (meego-tablet-ia32-pinetrail- from

Copy it to a USB stick and boot the ExoPC from the stick. Yes, ok, ok, ok, ok and ok. Boot. Ready.

I wanted some challenge, so I decided to compile and run JamMo. It was as easy as with Ubuntu. The game uses a fixed-size 800×480 window, so it would be handy to change the resolution of the ExoPC. Xrandr left black borders on the left and right, but the touchscreen still uses the whole screen (so elements in the middle of the screen are accessible normally, but elements near the left and right borders are not).

Solution (partial): add a new screen mode and use it.


cvt 840 480

And it gives: "840x480_60.00"   31.25  840 864 944 1048  480 483 493 500 -hsync +vsync

Run (add these to autorun; they are cleared at every boot):

xrandr --newmode  "840x480_60.00"   31.25  840 864 944 1048  480 483 493 500 -hsync +vsync
xrandr --addmode LVDS1 840x480_60.00

And when you want use that resolution, run

xrandr --output LVDS1 --mode 840x480_60.00

There are still small black borders, but they don't affect usage (the width must be a multiple of 8; you can test whether 848 is better…).

*The task switcher is still in the old middle of the screen
*Coming back might cause half of the screen to be black (this is corrected after the screen dims)
*The browser might rotate itself to portrait mode (even if it is first started in landscape)

by Aapo Rantalainen at December 05, 2011 01:14 PM

December 01, 2011

Aapo Rantalainen

Ubuntu on ExoPC

I got Intel’s ExoPC in my hands and tested Ubuntu on it.

The ExoPC is a laptop with a touchscreen and without a keyboard; some would call it a ‘tablet’. It is not ARM but 32/64-bit x86 (Atom). It has 2 GB of memory and a 64 GB SSD ‘hard disk’.

When I got it, it had an (old) MeeGo, which broke when I tried to upgrade. Because it is a ~standard PC, there are many Linuxes for it (e.g. Suse).

I installed Ubuntu 10.04.3 (the latest long-term-supported Ubuntu). The touchscreen was not working, so I upgraded three times: 10.04 -> 10.10 -> 11.04 -> 11.10. I used a USB keyboard (and also an ssh server), but there seems to be no default virtual keyboard. It is a very cool device with lots of potential.

Hardware buttons:
System Settings -> Keyboard -> Shortcuts
(Or in Gnome: System->Preferences->Keyboard Shortcuts)

There is one button on the back of the device, the power button. It is recognized as ‘PowerOff’.
There is one button (proximity sensor?) on the front of the device, the ‘orange magic circle’. It is recognized as ‘Audio media’ (XF86Media or XF86AudioMedia depending on the Ubuntu version).

Some criticism of the software:
Multitouch is not working (at least out of the box). I have not had time to investigate this further.

The screen dims when on battery, even when asked not to dim:

The touchscreen is no longer recognized after some time of use

Muting/unmuting the microphone via the command line doesn’t work (even though toggling works):

Any use for tablets? Maybe a musical game for children? How about JamMo?

(embedded YouTube video)


by Aapo Rantalainen at December 01, 2011 06:47 PM

November 06, 2011

Miia Ranta

Ubuntu 11.10 on an ExoPC/Wetab, or how I found some use for my tablet and learnt to hate on-screen keyboards

I attended an event in the spring that ended with the miraculous incident of being given an ExoPC to use. The operating system it came installed with was a bit painful to use (and I’m not talking about a Microsoft product), so I didn’t find much use for the device. I flashed it with a new operating system image quite often, only to note that few if any of the problems were ever fixed in the UI. Since the operating system project is pretty much dead now, with participants moving to new areas and projects of interest, I decided to bite the bullet and flash my device with the newest Ubuntu.

Installation requires a USB memory stick made into installation media with the tools shipped with regular Ubuntu. A keyboard is also nice to have to make the installation feasible in the first place, or at least it makes for a much less painful experience. After the system is installed comes the pain of getting the hardware to play nice. Surprisingly, I’ve had no problems other than trying to figure out how to make the device and operating system realise that I want to scroll or right-click with my fingers instead of a mouse. Almost all the previous instructions I’ve come across involve (at best) Ubuntu 11.04 and a 2.6.x kernel, and the rest fail to give detailed instructions on how to make scrolling or right-clicking work with evdev. The whole process is very frustrating, and I still haven’t figured everything out.

Anyway. The first thing you notice, especially without fingerscrolling working, is that the new scrollbars are a royal pain in the hiney. The problem isn’t as bad in places where it can be bypassed: in Chromium with the help of an extension called chromeTouch, where fingerscrolling can be set to work; in Gnome-shell, which actually has a decent-sized scrollbar; or by uninstalling overlay-scrollbar altogether, which isn’t pretty, but works.

The second immediate thing that slaps a cold wet towel on the face, after you’ve unplugged the USB keyboard, is the virtual keyboards. Ubuntu and its default environment Unity use OnBoard as the default on-screen keyboard. OnBoard is a complete keyboard with (almost) all the keys a normal keyboard would have, but it lacks a feature that is needed on a tablet computer: automatically hiding and unhiding itself. In addition to this annoyance, OnBoard had a tendency to swap the keyboard layout to what I assume to be either US or British instead of the Finnish one I had set as default during installation. One huge problem with OnBoard, at least in my use, is that it ends up underneath the Unity interface, where it’s next to useless.

I tried to install other virtual keyboards, like Maliit and Florence, but instructions and packages on Oneiric are lacking and anyway, I still don’t know how to change the virtual keyboard from OnBoard to something else. However, the virtual keyboard in a normal Gnome 3 session with Gnome-Shell seems to work more like the virtual keyboards should, but alas, it doesn’t seem to recognize the keyboard layout settings at all and thus I’m stuck to non-Finnish keyboard layout.

However, among all these problems Ubuntu 11.10 manages to show great potential with both Unity and Gnome 3. The Ubuntu messaging menu is nice once gmnotify has been installed (as I use the Chromium application Offline Gmail as my email client), Empathy is set up, the music application of choice is filled with music and browser settings are synchronized.

I’ve found that the webcam works perfectly and the video call quality is much better than it was earlier on my laptop, where I resorted to using GMail’s video call feature because it Just Works. It’s nice to see that pulseaudio delivers and bluetooth audio works 100% with both Empathy video calls and stereo music/video content.

Having read about the plans for future Ubuntu releases in blog posts from people who attended UDS-P in Orlando this past week, I openly welcome our future tablet overlords. Ubuntu on tablets needs love and it’s nice to know it’s coming. This all bodes well for my plan to take over the world with an Ubuntu tablet, screen, emacs and chromium :-)

by myrtti at November 06, 2011 12:06 AM

October 29, 2011

Ville-Pekka Vainio

Getting Hauppauge WinTV-Nova-TD-500 working with VDR 1.6.0 and Fedora 16

The Hauppauge WinTV-Nova-TD-500 is a nice dual tuner DVB-T PCI card (well, actually it’s a PCI-USB thing and the system sees it as a USB device). It works out-of-the-box with the upcoming Fedora 16. It needs firmware, but that’s available by default in the linux-firmware package.

However, when using the Nova-TD-500 with VDR, a couple of settings need to be tweaked or the signal will eventually disappear for some reason. The logs (typically /var/log/messages in Fedora) will have something like this in them:
vdr: [pidnumber] PES packet shortened to n bytes (expected: m bytes)
Maybe the drivers or the firmware have a bug which is only triggered by VDR. The problem can be fixed by tweaking VDR’s EPG scanning settings. I’ll post them here in case someone else is experiencing the same problem. They go into /etc/vdr/setup.conf in Fedora:

EPGBugfixLevel = 0
EPGLinger = 0
EPGScanTimeout = 0

It is my understanding that these settings disable all background EPG scanning, so VDR will only scan the EPGs of the channels on the transmitters it is currently tuned to. In Finland, most of the interesting free-to-air channels are on two transmitters and the Nova-TD-500 has two tuners, so in practice this should not cause many problems with outdated EPG data.
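If you want to confirm whether the problem still occurs after the tweak, grepping the log for the warning is enough. A minimal sketch (the log path and the sample line below are illustrative; on a real Fedora box you would grep /var/log/messages instead of the temporary file used here):

```shell
# Count "PES packet shortened" warnings in a VDR log.
# Assumption: a temp file with one sample line stands in for /var/log/messages.
logfile=$(mktemp)
echo 'vdr: [1234] PES packet shortened to 100 bytes (expected: 184 bytes)' > "$logfile"
count=$(grep -c 'PES packet shortened' "$logfile")
echo "$count"
```

If the count stops growing after restarting VDR with the new settings, the tweak has worked.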

by Ville-Pekka Vainio at October 29, 2011 06:07 PM

August 25, 2011

Miia Ranta

Things I learnt about managing people while being a Wikipedia admin

Just over four years ago I gave up my volunteer, unpaid role as an administrator of the Finnish Wikipedia. Today, while talking with a friend, I realised what has been one of the most valuable lessons in both my professional life and my hobbies. While I am quite pessimistic in general, I still benefit from these little nuggets of positive insight almost every day when communicating and working with other people.

  • Assume Good Faith. “Unless there is clear evidence to the contrary, assume that people who work on the project are trying to help it, not hurt it.” Most people aren’t your enemies. Most people will not try to hurt you. If stupidity abounds, it’s (usually) not meant as a personal attack on you, nor is it intentional.
  • When someone does something that doesn’t immediately make sense, or that contradicts your assumptions about their skills and common sense, discuss it with them! Don’t make assumptions based on partial information; ask for more details so you don’t need to assume the worst! If something is unclear, asking won’t make things worse.

Pessimists are never disappointed, only positively surprised. But even though the world seems like a dark and desolate place and humanity seems to be doomed, I still have to try to believe in the sensibility of people and that we can make something special out of the projects we work for. Ubuntu, Wikipedia, life… or just your day-to-day job.

by myrtti at August 25, 2011 11:49 PM

August 21, 2011

Miia Ranta

And then, unexpectedly, life happens

I hope none of you expected me to blog more often. It’s been over a year since I last blogged, and so much has happened since.

I’ve travelled to Cornwall, started a Facebook page that got a huge following in no time, fiddled a bit with CMS Made Simple at work, bought another Nexus One to replace one that broke and, after getting the broken one fixed, gave the extra to my sister as a Christmas present, taught Duncan how to make gravadlax and crimp Karelian pasties, visited Berlin and bought a game. I’ve attended a few geeky events, like Local MeeGo Network meetings in Tampere, Finland, the MeeGo Summit also in Tampere, the MeeGo Conference in San Francisco, US and OggCamp ’11 in Farnham, UK.

I’ve also taken a few steps in learning to code in QML, poked around with an Arduino and bought a new camera, an Olympus Pen E-PL1.

What else has happened? Well, among other things, my mother was diagnosed with cholangiocarcinoma right after New Year, and she passed away on the 30th of June.

Many things that I have taken for granted have changed or are gone forever. The importance of some things has changed as my life tries to find a new path to run in.

Blogging and some of my open source activities have suffered, which I’m planning to fix now that I feel strong enough to spend my energy on these hobbies again. Sorry for the hiatus, folks.

Coming up, perhaps in the near future:

  • Rants and Raves about Arduino
  • Entries about social networking sites
  • Camera/Photography jabber
  • Mobile phone/Tablet chatter

So, just so you know, I’m alive, and will soon be in an RSS feed reader near you. AGAIN.

by myrtti at August 21, 2011 12:20 AM

August 06, 2011

Ville-Pekka Vainio

The Linux/FLOSS Booth at Assembly Summer 2011

The Assembly Summer 2011 demo party / computer festival is happening this weekend in Helsinki, Finland. The Linux/FLOSS booth here is organized together by Finnish Linux User Group, Ubuntu Finland, MeeGo Network Finland and, of course, Fedora. I’m here representing Fedora as a Fedora Ambassador and handing out Fedora DVDs. Here are a couple of pictures of the booth.

The booth is mostly Ubuntu-coloured because most of the people here are members of Ubuntu Finland and Ubuntu in general has a large community in Finland. In addition to live CDs/DVDs, the MeeGo people also brought two tablets running MeeGo (I think they are both ExoPCs) and a few Nokia N950s. They are also handing out MeeGo t-shirts.

People seem to like the new multi-desktop, multi-architecture live DVDs that the European Ambassadors have produced. I think they are a great idea and worth the extra cost compared to the traditional live CDs.

by Ville-Pekka Vainio at August 06, 2011 11:11 AM

April 29, 2011

Sakari Bergen

Website remake coming up, comments disabled

The format of my current website has not worked very well for me, and I'm a bit lazy with bloggy stuff. So I decided to remake this website. I've already made a new design, and will probably be switching from Drupal to Wordpress because it's a bit simpler. I hope to get the new site up in a few months at the latest!

Due to a lot of spam recently, I disabled comments.

by sbergen at April 29, 2011 09:53 AM

March 21, 2011

Jouni Roivas


Recently Wayland has become a hot topic. Canonical has announced that Ubuntu will move to Wayland, and MeeGo also has great interest in it.

Qt has had (experimental) Wayland client support for some time now.

A very new thing is support for using Qt as a Wayland server. With that, one can easily make one's own Qt-based Wayland compositor. This is huge. Until now, the only working Wayland compositor has been the one in wayland-demos. Using Qt for this opens many opportunities.

My vision is that Wayland is the future. And the future might be there sooner than you think...

by Jouni Roivas at March 21, 2011 09:42 AM

January 03, 2011

Ville-Pekka Vainio

Running Linux on a Lenovo Ideapad S12, part 2

Here’s the first post of what seems to be a series of posts now.


I wrote about acer-wmi being loaded on this netbook to the kernel’s platform-driver-x86 mailing list. That resulted in Chun-Yi Lee writing a patch which adds the S12 to the acer-wmi blacklist. Here’s the bug report.


I did a bit of googling on the ideapad-laptop module and noticed that Ike Panhc had written a series of patches which enable a few more of the Fn keys on the S12. The git repository for those patches is here. Those patches are also in linux-next already.

So, I cloned Linus’ master git tree, applied the acer-wmi patch and then git pulled Ike’s repo. Then I followed these instructions, except that now Fedora’s sources are in git, so you need to do something like fedpkg co kernel; cd kernel; fedpkg prep and then find the config file that suits you. Now I have a kernel which works pretty well on this system, except for the scheduling/sleep issue mentioned in the previous post.

by Ville-Pekka Vainio at January 03, 2011 10:19 AM

December 27, 2010

Ville-Pekka Vainio

Running Linux (Fedora) on a Lenovo Ideapad S12

I got a Lenovo Ideapad S12 netbook (the version which has Intel’s CPU and GPU) a few months ago. It requires a couple of quirks to work with Linux; I’ll write about them here in case they’re useful to someone else as well.


The netbook has a “Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01)” wifi chip. It works with the “b43” open source driver, which is in the kernel. However, I think that it may not actually reach the speeds it should. You could also use the proprietary “wl” kernel module, available in RPM Fusion as “kmod-wl”, but I don’t like to use closed source drivers myself.

The b43 driver needs the proprietary firmware from Broadcom to work with the 4312 chip. Following these instructions should get you the firmware.


The kernel needs the “nolapic_timer” parameter to work well with the netbook. If that parameter is not used, it seems like the netbook will easily sleep a bit too deeply. Initially people thought that the problem was in the “intel_idle” driver; the whole thing is discussed in this bug report. However, according to my testing, the problem with intel_idle was fixed, but the netbook still has problems, they are just a bit more subtle. The netbook boots fine, but when playing music, the system will easily start playing the same sample over and over again if the keyboard or mouse is not used for a while. Apparently the system enters some sort of sleeping state. I built a vanilla kernel without intel_idle and I’m seeing this problem with it as well.
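For reference, the parameter goes at the end of the kernel line in GRUB’s configuration. A sketch of doing that with sed, under the assumption that the file is /boot/grub/grub.conf (GRUB legacy, as on Fedora of this era); the kernel version and root device below are illustrative, and a temporary copy is edited so the sketch is self-contained:

```shell
# Append nolapic_timer to the kernel line of a GRUB legacy config.
# Assumption: on the real netbook, $conf would be /boot/grub/grub.conf.
conf=$(mktemp)
echo 'kernel /vmlinuz-2.6.35.10-74.fc14.i686 ro root=/dev/vg/root rhgb quiet' > "$conf"
sed -i 's/^kernel .*/& nolapic_timer/' "$conf"
cat "$conf"
```

After editing the real file, the parameter takes effect on the next boot.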

Then there’s “acer-wmi”. The module gets loaded by the kernel, and in older kernel versions it was probably somewhat necessary because it handled the wifi/bluetooth hardware killswitch. It causes problems with NetworkManager, though: it disables the wifi chip on boot and you have to enable wifi from the NetworkManager applet by hand. Here’s my bug report, which hasn’t gotten any attention, but then again, I may have filed it under the wrong component. Anyway, the 2.6.37 series of kernels has the “ideapad_laptop” module, which apparently handles the hardware killswitch, so acer-wmi shouldn’t be needed any more and can be blacklisted.
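Blacklisting the module amounts to a one-line file under /etc/modprobe.d/. A minimal sketch (the file name is my own choice, and since writing to /etc/modprobe.d needs root, the snippet uses a temporary directory purely to show the file’s contents):

```shell
# Create a modprobe blacklist entry for acer-wmi.
# Assumption: on a real system conf_dir would be /etc/modprobe.d (as root);
# a temp dir is used here so the sketch is self-contained.
conf_dir=$(mktemp -d)
echo "blacklist acer-wmi" > "$conf_dir/blacklist-acer-wmi.conf"
cat "$conf_dir/blacklist-acer-wmi.conf"
```

With the file in place under /etc/modprobe.d/, acer-wmi will no longer be auto-loaded on boot.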

by Ville-Pekka Vainio at December 27, 2010 03:18 PM

December 12, 2010

Riku Leino

Scribus 1.3.9 has been released

The Scribus development version has reached version number 1.3.9. What makes this version significant is that it is the last development version before a new stable branch is opened. The next stable version will most likely be 1.4.0.

The main focus of development was stability. 70 bugs have been fixed since version 1.3.8. There are also a few new features concerning:

  • the build environment
  • the scrapbook
  • resources and resource management
  • file import
  • typography
  • documentation

by Tsoots at December 12, 2010 03:52 PM