# blogit.vapaasuomi.fi

## March 14, 2015

### Viikon VALO

#### 5x11 Google Fonts - Viikon VALO #219

Google Fonts is a collection of free typefaces for many kinds of use.
Google Fonts is a service through which Google offers several hundred free typefaces, i.e. fonts, for example for use in the visual design of web sites. The service provides an interface for choosing typefaces by category, style and supported character sets. The available fonts are easy to browse and test with various sample texts. If you want to use the fonts in your own web site's style, the service generates a simple snippet of code; once added to your site, the fonts are loaded directly from Google's service when the pages are displayed. The files can, however, also be downloaded, if you want to serve the font files directly from your own site without linking to Google's service, or if you want to use the typefaces for something else, such as text documents or image editing. The font files can be downloaded directly from Google's service or from the GitHub repository that contains the fonts. The typefaces are in TrueType format (.ttf).
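
The embed code the service generates looks roughly like the following (a sketch from memory; the font name Roboto here is just an example):

```html
<!-- load the font from Google's service -->
<link href="https://fonts.googleapis.com/css?family=Roboto" rel="stylesheet" type="text/css">
<style>
  /* then use it in the site's own styles */
  body { font-family: 'Roboto', sans-serif; }
</style>
```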

The typefaces distributed by the Google Fonts service are all published under free licenses. Most of the font families are licensed under version 1.1 of the SIL Open Font License, some under Apache 2, and the Ubuntu fonts under the Ubuntu Font License. The licenses grant permission to use, modify and redistribute the fonts. They require that new typefaces derived from the fonts be distributed under the same license. This licensing requirement does not, of course, apply to actual works, such as images and documents, that merely use the fonts.

In the GitHub repository the files are in fact organized by license. The repository contains roughly 870 different typefaces, so it is bound to include usable fonts for many purposes. Browsing them is probably easiest through Google's service.

Homepage
License
Supported platforms
All platforms

Text: Pesasa
Screenshots: Pesasa

## March 09, 2015

### Niklas Laxström

#### IWCLUL 3/3: conversations and ideas

In the IWCLUL talks, Miikka Silfverberg's mention of collecting words from Wikipedia resonated with my earlier experience of working with Wikipedia dumps, especially how difficult it is. I talked with some people at the conference, and everyone seemed to agree that processing Wikipedia dumps takes a lot of time which they could spend on something else. I am considering publishing plain-text Wikipedia dumps and word frequency lists. While working on the DigiSami project, I familiarized myself with the necessary utilities as well as with the Wikimedia Tool Labs, so relatively little effort would be needed. The research value would be low, but it would be worth it if enough people find these dumps and save time. A recent update is that Parsoid is planning to provide a plain text format, so this is likely to become even easier in the future. Still, there might be some work to do to collect pages into one archive and to decide which parts of a page stay and which are removed: for example, converting an infobox into a collection of isolated words is not useful for use cases such as WikiTalk, and it can also easily skew word frequencies.
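
As a sketch of what producing such a word frequency list could look like once a plain-text dump exists (the file names here are made up), standard Unix tools go a long way:

```shell
# stand-in for a plain-text dump; in reality this would be megabytes of article text
printf 'Free text from a wiki and more text\n' > plain.txt

# split into words, lowercase, then count and sort by frequency
tr -cs '[:alpha:]' '\n' < plain.txt \
  | tr '[:upper:]' '[:lower:]' \
  | sort | uniq -c | sort -rn > wordfreq.txt

cat wordfreq.txt   # most frequent words first
```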

I talked with Sjur Moshagen about keyboards for less-resourced languages. Nowadays they have keyboards for Android and iOS, in addition to the keyboards for computers which already existed. They have some impressive additional features, like automatically adding missing accents to typed words. That would be too complicated to implement in jquery.ime, a project used by Wikimedia that implements keyboards in a browser. At least the aforementioned example uses a finite state transducer. Running finite state tools in the browser does not yet feel realistic, even though some solutions exist*. The alternative of making requests to a remote service would slow down typing, except perhaps with some very clever implementation, which would probably be fragile at best. I still have to investigate whether there is some middle ground that would bring the basic keyboard implementations to jquery.ime.

*Such as jsfst. One issue is that the implementations and the transducers themselves can take a lot of space, which means we would run into the same issues as when distributing large web fonts at Wikipedia.

I spoke with Tommi Pirinen and Antti Kanner about implementing a dictionary application programming interface (API) for the Bank of Finnish Terminology in Arts and Sciences (BFT). That would allow direct use of BFT resources in translation tools like translatewiki.net and Wikimedia’s Content Translation project. It would also help indirectly, by using a dump for extending word lists in the Apertium machine translation software.

I spoke briefly about language identification with Tommi Jauhiainen, who had a poster presentation about the project “The Finno-Ugric languages and the internet”. I had implemented one language detector myself, using an existing library. Curiously enough, many other people I have met in Wikimedia circles have also made their own implementations. Mine had severe problems classifying languages which are very close to each other. Tommi gave me a link to another language detector, which I would like to test in the future to compare its performance with my previous attempts. We also talked about something I call “continuous” language identification, where the detector detects the parts of running text which are in a different language. A normal language detector will be useful for my open source translation memory service project, called InTense. Continuous language identification could be used to post-process Wikipedia articles and tag foreign text so that correct fonts are applied, and possibly also in WikiTalk-like applications, to give the text-to-speech (TTS) system a hint on how to pronounce those words.

Reasonator is software that generates visually pleasing summary pages, in natural language and structured sections, based on structured data. More specifically, it uses Wikidata, the Wikimedia structured data project developed by Wikimedia Germany. Reasonator works primarily for persons, though other types of subjects are being developed. Its localisation is limited compared to the roughly three hundred languages of MediaWiki. Translating software which generates natural language sentences dynamically is very different from the usual software translation, which consists mostly of fixed strings with an occasional placeholder that is replaced dynamically when the text is shown to a user.

It is not a new idea to use Grammatical Framework (GF), a language translation system based on an interlingua, for Reasonator. In fact I had proposed this earlier in private discussions with Gerard Meijssen, but this conference renewed my interest in the idea, as I attended the GF workshop held by Aarne Ranta, Inari Listenmaa and Francis Tyers. GF seems to be a good fit here, as it allows limited-context, limited-vocabulary translation into many languages simultaneously; vice versa, Wikidata will contain information such as the gender of people, which can be fed to GF to get correct grammar in the generated translations. It would be very interesting to have a prototype of Reasonator-like software using GF as the backend. The downside of GF is that (I assume) it is not easy for our regular translators to work with, so work is needed to make it easier and more accessible. The hypothesis is that with a GF backend we would get better language support (as in grammatically correct and flexible) with less effort in the long run. That would mean providing access to all the Wikidata topics even in smaller languages, without the effort of manually writing articles.

## March 07, 2015

### Viikon VALO

#### 5x10 Pandoc - Viikon VALO #218

Pandoc is a command-line tool for converting text-based files from one format to another.
Pandoc is a real multi-purpose tool when you need to convert text material written in one markup language into another markup language. Pandoc can read files written in many formats and save the text it has read in even more markup languages. With Pandoc it is easy to automate, say, the conversion of a number of files from one format to another. Of course, when doing such conversions, the files Pandoc produces may still need some manual editing and polishing, but Pandoc takes care of most of the routine work.

The languages and file formats Pandoc can read include HTML, Markdown, LaTeX, MediaWiki and Textile. In addition to those already mentioned, the languages and file formats Pandoc can produce include Microsoft Word's DOCX format, the ODT format used by OpenOffice and LibreOffice, the e-book format EPUB, and PDF files. Pandoc can also produce text for a few HTML- and LaTeX-based presentation formats, such as Slidy, reveal.js, Slideous, S5 and DZSlides, as well as Beamer. A more detailed list of supported formats can be found on the program's homepage or in the documentation shipped with it.

Pandoc understands several variants and extensions of the Markdown language, including GitHub's extended syntax. Thanks to these, texts written in Markdown can contain, among other things, tables, footnotes, code blocks, automatic tables of contents, embedded LaTeX-style mathematics, and Markdown syntax written inside HTML code. When converting Markdown text to an HTML file, it is even possible to enable colored syntax highlighting for program code.

Pandoc is implemented so that each input file is first converted into the program's internal representation, from which output in the desired target language is then generated. This makes the program modular, and adding support for a new language is flexible. Pandoc also supports, for example, rendering LaTeX-style mathematical formulas in HTML output in several ways, including MathJax and MathML.

Getting started with Pandoc is quite easy. In the simplest case, Pandoc recognizes the format used in the input file and saves the output file, specified with the -o option, in the format implied by its file extension. For example:

 pandoc -o testi.pdf testi.html

The formats of the input and output files can, however, also be given to Pandoc explicitly with the -f (from) and -t (to) options (alternatively -r (read) and -w (write)):
 pandoc -f markdown -t html -o index.html index.md

Pandoc also offers a large number of other options for tailoring its behavior. For example, with the --template=FILE option you can choose a document template as the model for the generated file. Other examples of options include generating a table of contents, adding line numbers and syntax highlighting to program code, and choosing the math package for HTML output.

Homepage
http://johnmacfarlane.net/pandoc/
Source code
https://github.com/jgm/pandoc
License
GNU GPL
Supported platforms
Linux, Windows, Mac OS X, FreeBSD, NetBSD, OpenBSD
Installation
The program can be downloaded from its homepage. On Linux systems it can usually be installed through the package manager.

Text: Pesasa
Screenshots: Pesasa

## March 04, 2015

### Aapo Rantalainen

The Android debugging tool adb has a ‘new‘ feature called sideload, which is supported on Android devices but not by all versions of the host toolset. Here are instructions for compiling the newest version from source.

## Context

Adb (Android Debug Bridge) is a debugging tool for communicating between a host and an Android device.

## Problem

Some people follow the instructions and then encounter a mysterious error:

## Solution

#Needed tools
sudo apt-get install zlib1g-dev libssl-dev git make gcc

#Download source code (this is currently the newest branch, in future it might still work without issues)

#Create Makefile
echo -e '
SRCS+= fdevent.c
SRCS+= commandline.c
SRCS+= console.c
SRCS+= file_sync_client.c
SRCS+= get_my_path_linux.c
SRCS+= services.c
SRCS+= sockets.c
SRCS+= transport.c
SRCS+= transport_local.c
SRCS+= transport_usb.c
SRCS+= usb_linux.c
SRCS+= usb_vendors.c

VPATH+= ../libcutils
SRCS+= socket_local_client.c
SRCS+= socket_local_server.c
SRCS+= socket_loopback_client.c
SRCS+= socket_loopback_server.c
SRCS+= socket_network_client.c

VPATH+= ../libzipfile
SRCS+= centraldir.c
SRCS+= zipfile.c

CPPFLAGS+= -DHAVE_FORKEXEC=1
CPPFLAGS+= -DHAVE_TERMIO_H
CPPFLAGS+= -DHAVE_SYS_SOCKET_H
CPPFLAGS+= -DHAVE_OFF64_T
CPPFLAGS+= -D_GNU_SOURCE
CPPFLAGS+= -D_XOPEN_SOURCE
CPPFLAGS+= -I.
CPPFLAGS+= -I../include

CC= gcc
LD= gcc

OBJS= $(SRCS:.c=.o)

all: adb

adb: $(OBJS)
\t$(LD) -o $@ $(LDFLAGS) $(OBJS) $(LIBS)

clean:
\trm -rf $(OBJS) adb
' > standalone_Makefile

#Compile
make -f standalone_Makefile

#Profit


Adb can be tested without a device connected, using a dummy file name. The error message should then be:

* cannot read 'foobar' *


## Credits

Makefile starting point: http://android.serverbox.ch/?p=1217

commit 447f061da19fe46bae35f1cdd93eeb16bc225463
Author: Doug Zongker
Date: Mon Jan 9 14:54:53 2012 -0800

Recovery will soon support a minimal implementation of adbd which will
install them. This is the client side command (mostly resurrected out
of the old circa-2007 "adb recover" command) and the new connection
state.

Change-Id: I4f67b63f1b3b38d28c285d1278d46782679762a2


But it is still not documented: https://code.google.com/p/android/issues/detail?id=158394

## February 28, 2015

### Viikon VALO

#### 5x09 Phatch - Viikon VALO #217

Phatch is a tool for applying a set of editing operations to even a large number of image files in one go.
When you need to resize a single image file, adjust its colors or perform some other operation, it is often easiest to simply start GIMP, Krita, Pinta or some other image editor, make the required changes and save the edited image. But when there are, say, 217 images to edit, it is time to find some other way of making the changes. One option is to do the edits on the command line, for example with the tools in the ImageMagick package. Another is to use Phatch, which runs with a windowed user interface.

The name Phatch is a combination of the words PHoto and bATCH, referring to processing photos as batch jobs (processing material in fairly large batches at a time). The idea of Phatch is that the user collects a task list from the editing operations the program offers and then has these operations applied to the selected files. Phatch provides a large selection of operations, such as scaling an image, adjusting contrast and brightness, creating frames, adding text or a drop shadow, and many other transformations. The operations are collected into a list, arranged into the desired order of execution, and given the desired parameters, such as the amount of scaling or the direction and distance of the drop shadow. A save operation is added as the last step, with parameters such as the destination folder, a pattern for the names of the saved files, and the file format. The task list can be saved for later use, so that the same edits with the same settings can be repeated later on other images.

Finally, the program is told to start executing the tasks and is given the image files it should process. The files to be edited can be given to the program either as a whole folder or as individually selected files. Alternatively, the image can be taken from the clipboard. In addition, execution can be limited to image files of certain types only.

Phatch also includes a tool for inspecting the metadata of an individual file by dragging the image file into the inspection window.

Homepage
http://photobatch.wikidot.com/
License
GNU GPL v.3
Supported platforms
Linux, Windows, Mac OS X
Installation
On Linux the program can be installed through the distribution's package manager. For other platforms it can be downloaded from the homepage.

Text: Pesasa
Screenshots: Pesasa

## February 23, 2015

### Niklas Laxström

#### IWCLUL 2/3: morphology, OCR, a corpus vs. Wiktionary

More on IWCLUL: now on to the sessions. The first session of the day was by the invited speaker Kimmo Koskenniemi. He is applying his two-level formalism in a new area, old literary Finnish (example of old literary Finnish). By using two-level rules for old written Finnish together with OMorFi, he is able to automatically convert old text into standard Finnish dictionary forms, which can be used, in his main example, as input to a search engine. He uses weighted transducers to rank the most likely equivalent modern-day words. For example, the contemporary spelling of wijsautta is viisautta, which is an inflected form of the noun viisaus (wisdom). He only takes the dictionary forms, because otherwise there are too many unrelated suggestions. This avoids the usual problem of too many unrelated morphological analyses: I had the same problem in my master's thesis when I attempted to use OMorFi to improve Wikimedia's search system, which was still using Lucene at that time.

Jeremy Bradley gave a presentation about an online Mari corpus. Their goal was to make a modern English-language textbook for Mari, for people who do not have access to native speakers. I was happy to see that they used a free/copyleft Creative Commons license. I asked him whether they had considered Wiktionary. He told me he had discussed it with a person from Wiktionary who was against an import. I will be reaching out to my contacts to see whether another attempt will succeed. The automatic transliteration between Latin, Cyrillic and IPA was nice, as I have been entertaining the idea of transliterating Swedish to Finnish for WikiTalk, to make it able to function in Swedish as well using only the Finnish speech components. One point sticks with me: they had to add information about verb complements themselves, as it was not recorded in their sources. I can sympathize with them, based on my own language learning experiences.

Stig-Arne Grönroos’ presentation on Low-resource active learning of North Sámi morphological segmentation did not contain any surprises for me after having been exposed to this topic previously. All efforts to support languages where we have to cope with limited resources are welcome and needed. Intermediate results are better than working with nothing while waiting for a full morphological analyser, for example. It is not completely obvious to me how this tool can be used in other language technology applications, so I will be happy to see an example.

Miikka Silfverberg presented on OCR with OMorFi: can morphological analyzers improve the quality of optical character recognition? To summarize heavily, OCR performed worse when OMorFi was used than when just taking the top N most common words from Wikipedia. As I understood it, this is not exactly the same problem as the large number of readings generated by a morphological analyser, but something different yet related.

## February 21, 2015

### Viikon VALO

#### 5x08 TiddlyWiki5 - Viikon VALO #216

TiddlyWiki5 is a single-file note-taking tool used in a web browser.
TiddlyWiki5 is a reimplementation of the previously reviewed TiddlyWiki. It is a wiki-style note-taking tool implemented as a browser-based single-page application (SPA). Notes do not have to be organized linearly; instead, individual pieces of content, tiddlers, can be freely linked to one another. The version number 5 refers to the HTML5 techniques used in the implementation. TiddlyWiki5 consists of a single HTML file containing the JavaScript application and the wiki's data content. When the file is opened in a browser, the JavaScript application shows a user interface for browsing the TiddlyWiki's content and editing new content into it. Saving happens into the same HTML file and is implemented in a few different ways depending on the browser. As a single-file application, TiddlyWiki5 is easy to carry around on a USB stick or to keep online in a cloud storage service.

The TiddlyWiki5 user interface consists of an area showing individual tiddlers and a sidebar used for browsing and editing them. By default, tiddler content is written in TiddlyWiki5's own wiki language, which provides the typically needed formatting: headings, lists, bold, italics, underlining, tables and more. The wiki language has evolved somewhat from the earlier version of TiddlyWiki. In addition to the actual wiki markup, the text can contain TiddlyWiki macros and widgets, which can be used to build simple user interfaces, such as tabs and buttons, into tiddlers. Tiddler content is wiki text by default, but other options include SVG vector graphics, GIF, ICO, JPG and PNG images, CSS styles and JSON data. TiddlyWiki5 also supports password-protected encryption of the content.
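
As a rough illustration of the wiki language (written from memory, so details may vary between versions), a tiddler's text could look like this:

```
! Shopping list

This is ''bold'', //italic// and __underlined__ text.

* Milk
* Bread, see also [[Recipe Ideas]]
```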

TiddlyWiki5 is extensible in many ways thanks to its plugin architecture. Plugins can bring new functionality to a TiddlyWiki5 file, such as new content types, features for the wiki parser, or various macros and widgets. New plugins can be brought into your own TiddlyWiki5 file either by dragging them with the mouse from another TiddlyWiki5 file or by writing them yourself. In practice a plugin consists of one or more tiddlers, which may contain, for example, JavaScript code or other definitions. Useful examples include the KaTeX plugin for displaying mathematical formulas written in LaTeX, the Markdown plugin, which adds the Markdown markup language as an alternative way of writing tiddler content, and the highlight plugin, which adds syntax highlighting to code sections written in tiddlers.

Since in a single-file application both the program itself and the content created with it live in the same HTML file, TiddlyWiki5 must be able to save itself somehow. Because JavaScript running in a browser has no permission to write to the user's disk, other ways of saving have had to be devised for TiddlyWiki. With Firefox, the smoothest way is to use the TiddlyFox browser add-on: whenever TiddlyWiki5 saves, it asks the add-on to do the actual writing to disk. With other HTML5-capable browsers, the save icon is actually a download link with which the user downloads a new version of the HTML file generated by the program. Depending on the browser and its settings, the user either chooses the location and name of the saved file, or the file is saved automatically into the "Downloads" folder of the user's home directory. A third option is to use TiddlyWiki5 through Node.js: the tiddlywiki module installed in Node.js then acts as a local server on the user's computer, serving a page the user can edit. A fourth option is the TiddlyDesktop application, a node-webkit-based application for using TiddlyWiki5.

There are also services, such as Tiddlyspot, that offer an editable TiddlyWiki online.

Homepage
http://tiddlywiki.com
Source code
https://github.com/Jermolene/TiddlyWiki5
TiddlyDesktop
https://github.com/Jermolene/TiddlyDesktop
License
BSD (3-clause BSD license)
Supported platforms
Browsers
Installation
Videos
An introduction to TiddlyWiki5
An introduction to using TiddlyDesktop

Text: Pesasa
Screenshots: Pesasa

## February 19, 2015

### Viikon VALO

#### 5x07 PicoCMS - Viikon VALO #215

PicoCMS is a light and simple database-free content management system for maintaining web sites.
For maintaining some web sites, database-backed content management systems such as Drupal, WordPress and Joomla feel like overkill. On the other hand, maintaining even a site of just a few pages as static HTML files can become laborious without tools, if the pages are supposed to be uniform, built on the same template, and contain the same shared elements, such as navigation. Pico, implemented without a database using plain files, can suit this kind of use. A site made with Pico consists of the fairly simple Pico content management system implemented in PHP, a customizable page template, and a directory into which the content is written as text files in the Markdown markup language. Pico is especially suited for users who are not afraid of editing text files and transferring them to a server with a file transfer program. In its default installation Pico includes no browser-based login or page editing at all.

Content for Pico is written in Markdown, which closely resembles the quite natural way of formatting text used in plain-text emails. The same Markdown is also used, for example, in the GitHub and Stack Overflow services. Occasional more demanding formatting can be added among the Markdown text as plain HTML. Content can also be grouped by creating directory structures.

For the site's page templates Pico uses the Twig engine, which makes it fairly easy to implement, for example, a blog site's article listing with excerpts. Markdown page files can include, in a comment block, various pieces of information used by the templates, such as the page title, the date, and the template used to display the page.
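
A content file for Pico might look roughly like this (a sketch; the exact set of supported meta fields depends on the Pico version and the template):

```
/*
Title: My first post
Date: 2015-02-19
Template: post
*/

# My first post

Regular **Markdown** content, with an occasional <em>HTML</em> tag where needed.
```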

Pico's base installation is rather limited in features, and most of the features you need can be added as plugins. Available plugins include support for an RSS feed, a tag system, pagination of automatically generated page lists, a more versatile navigation system and an online editor. You can also write plugins yourself; they can, for example, produce variables to be used in the page templates.

Homepage
http://picocms.org/
Source code
https://github.com/picocms/Pico
License
MIT
Supported platforms
PHP
Installation
The software can be downloaded from its homepage or from GitHub and installed on a web server that can run PHP (version 5.3 or newer).

Text: Pesasa
Screenshots: Pesasa

### Niklas Laxström

#### Prioritizing MediaWiki’s translation strings

After a very long wait, MediaWiki's top 500 most important messages are back at translatewiki.net with fresh data. This list helps translators prioritize their work to get the most out of their effort.

## What are the most important messages

In this blog post the term message means a translatable string in a piece of software; technically, when a message is shown to users, they see different strings depending on the interface language.

The MediaWiki software includes almost 5,000 messages (~40,000 words), or almost 24,000 messages (~177,000 words) if we include extensions. Since 2007, we have made a list of the roughly 500 messages which are used most frequently.

Why? If translators can translate a few hundred words per hour, and translating messages is probably slower than translating running text, it will take weeks to translate everything. Most of our volunteer translators do not have that much time.

Assuming that the messages follow a long-tail pattern, a small number of messages are shown* to users very often, like the Edit button at the top of each page in MediaWiki. On the other hand, most messages are only shown in rare error conditions or are part of disabled or restricted features. Thus it makes sense to translate the most visible messages first.

Concretely, translators and i18n fans can easily monitor the progress of MediaWiki localisation by finding meaningful numbers on our statistics page, and we have a clear minimum service level for new locales added to MediaWiki. In particular, the Wikimedia Language committee requires that at the very least all the most important messages are translated into a language before that language is given a Wikimedia project subdomain. This gives an incentive to kickstart the localisation of new languages, ensures that users see Wikimedia projects mostly in their own language, and avoids linguistic colonialism.

The screenshot shows an example page with messages replaced by their key instead of their string content.

## Some history and statistics

The use of the list for monitoring had a fantastic impact in 2007 and 2009, when translatewiki.net was still ramping up, because it gave translators concrete goals and allowed us to streamline the language proposal mechanism, which had been trapped in a dilemma between a growing number of requests for language subdomains and a growing number of seemingly dead open subdomains. There is some more background on translatewiki.net.

Languages with over 99 % of the most used messages translated were:

There is much more to do, but we now have a functional tool to motivate translators! To reach the peak of 2011, the least translated language among the first 181 will have to translate 233 messages, which is a feasible task. The 300th language is 30 % translated and needs 404 more translations. If we reached such numbers, we could confidently say that we really have Wikimedia projects in 280+ languages, however small.

* Not necessarily seen: I’m sure you don’t read the whole sidebar and footer every time you load a page in Wikipedia.

## Process

At Wikimedia, we first logged, for about 30 minutes, all requests to fetch certain messages by their key. We used this as a proxy variable for how often a particular message is shown to the user, which in turn is a proxy for how often a particular message is seen by the user. This is in no way an exact measurement, but I believe it is good enough for the purpose. After the 30 minutes, we counted how many times each key had been requested and sorted the keys by frequency. The result was a list of about 17,000 different keys observed in over 15 million calls. This concluded the first phase.
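
The counting and sorting step is a classic Unix pipeline; assuming the log contains one requested message key per line (the tiny synthetic log here is just for illustration):

```shell
# a tiny synthetic log: one requested message key per line
printf 'edit\nedit\nsave\nedit\nsave\nhistory\n' > keys.log

# count how many times each key was requested, most frequent first
sort keys.log | uniq -c | sort -rn > key-frequencies.txt

cat key-frequencies.txt
```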

In the second phase, we applied a rigorous human cleanup to the list with the help of a script, as follows:

1. We removed all keys not belonging to MediaWiki or any extension. There are lots of keys which can be customized locally, but which don’t correspond to messages to translate.
2. We removed all messages which were tagged as “ignored” in our system. These messages are not available for translation, usually because they have no linguistic content or are used only for local site-specific customization.
3. We removed messages requested fewer than 100 times in the time span, as well as other messages with no meaningful linguistic content, such as messages consisting only of dashes or other punctuation, which usually need no changes in translation.
4. We removed any messages we judged to be technical or not shown often to humans, even though they appeared high in this list. This includes some messages which are only seen inside comments in the generated HTML and some messages related to APIs or EXIF features.
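
Steps 2 and 3 above could be sketched with standard tools along these lines (the file names and key names are hypothetical; the real cleanup was done with a script and human judgment):

```shell
# synthetic phase-one output: "<count> <key>" per line, most frequent first
printf '  90210 edit\n  5000 tooltip-save\n  120 exif-artist\n  40 rarely-used\n' > key-frequencies.txt
# keys tagged as "ignored", i.e. not available for translation
printf 'exif-artist\n' > ignored-keys.txt

# drop keys requested fewer than 100 times, then drop the ignored keys
awk '$1 >= 100' key-frequencies.txt | grep -vFwf ignored-keys.txt > shortlist.txt

cat shortlist.txt
```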

Finally, some coding work was needed by yours truly to let users select those messages for translation at translatewiki.net.

## Discoveries

In this process some points emerged that are worth highlighting.

• 310 messages (62 %) of the previous list (from 2011) are in the new list as well. Superseded user login messages have now been removed.
• Unsurprisingly, there are new entries from new highly visible extensions like MobileFrontend, Translate, Collection and Echo. However, except in a dozen or so languages, translators didn't manage to keep up with such messages in the absence of a list.
• I just realized that we are probably missing some high-visibility messages used only on the JavaScript side. That is something we should address in the future.
• We slightly expanded the list from 500 to 600 messages, after noticing there were few or no “important” messages beyond that point. This also leaves some breathing space to drop messages which get removed from the software.
• We did not do a manual pass as was done for the original list, which included «messages that are not that often used, but important for a proper look and feel for all users: create account, sign on, page history, delete page, move page, protect page, watchlist». A message like “watchlist” got removed, which may raise suspicions: but it’s “just” the HTML title of Special:Watchlist, more or less as important as the name “Special:Watchlist” itself, which is not included in the list either (magic words, namespaces and special page names are not included). All in all, the list seems plausible.

## Conclusion

Finally, the aim was to make this process reproducible so that we can repeat it yearly, or even more often. I hope this blog post serves as documentation for achieving that.

I want to thank Ori Livneh for getting the key counts and Nemo for curating the list.

## February 16, 2015

### Niklas Laxström

#### IWCLUL event report 1/3: the story

IWCLUL is short for International Workshop on Computational Linguistics for Uralic Languages. I attended the conference, held on January 16th 2015, and presented a joint paper with Antti on Multilingual Semantic MediaWiki for Finno-Ugric dictionaries at the poster session.

I attentively observe the glimmering city lights of Tromsø as the plane lands in darkness, trying to match them to the maps I studied on my computer before the trip. At the airport I receive a kind welcome from Trond, in Finnish, together with a group of other people going to the conference. While driving us to our hotels, Trond describes the sights of the island we pass by. Antti, the co-author of our paper on Multilingual Semantic MediaWiki, and I check in to the hotel and joke about our tendency to forget posters in different places.

Next morning I meet Stig-Arne at breakfast. We decide to go see the local cable car. We wander around the city centre until we finally find a place that sells bus tickets; we had asked a few people, but they gave conflicting directions. We take the bus and then Fjellheisen, the cable car, to the top. The views are wonderful even in winter. I head back and walk around the centre for a while. I buy some postcards and use that as an excuse to get inside and warm up.

On Friday, the conference day, almost by miracle we reach the venue without too many issues, despite seeing no signs on the University of Tromsø campus. More about the conference itself will follow in the next parts. And the poster? We forgot to take it with us from the social event after the conference.

## February 12, 2015

### Ubuntu-blogi

#### New Ubuntu products: a Dell mobile workstation and a Bq phone

New Ubuntu products are now available in Finland as well.

### Dell Precision M3800 Developer Edition

The new features of the Precision M3800 workstation, the thinnest and lightest 15-inch mobile workstation on the market, include a 4K Ultra HD touch display, Thunderbolt 2 technology and an Ubuntu-based edition for software developers.

The Thunderbolt 2 port enables 20 Gbps data transfer speeds and supports high-resolution, high-performance displays and other devices. Dell has also added storage options to the M3800 and raised the available internal drive capacity to 2 terabytes. The laptop is about 18 mm thick and weighs about 1.88 kg. It comes with a quad-core Intel Core i7 processor, an NVIDIA Quadro K1100M graphics card and 16 GB of RAM.

The Precision M3800 is available with Ubuntu, Windows 7 and Windows 8.1. For the first time, Dell offers an Ubuntu-based Developer Edition of the Precision M3800. The Ubuntu edition can be ordered directly from Dell Finland's phone service (businesses) or through Dell resellers across the country (consumers as well; deliveries start in March).

Next we await details of the newest version of the Dell XPS 13 Developer Edition, which should not trail the M3800 by much. Expected is the best Ultrabook on the market, with official Ubuntu support.

### The world's first Ubuntu phone: Bq Aquaris E4.5 Ubuntu Edition

The Bq Aquaris E4.5 Ubuntu Edition is the world's first Ubuntu phone, now available in limited quantities in Europe. Ubuntu phones use Ubuntu's new Unity 8 user interface and its usage model built especially around Scopes. Traditional apps included are, among others, Telegram, HERE Maps, Cut the Rope, and the Facebook and Twitter web apps. Like desktop Ubuntu, Ubuntu phones are among the most strongly open-source-based mass-market products, so developers can get involved at every level, not just app development.

Sales started yesterday with a flash sale. At first the sale suffered from major technical problems, and Finland was not initially among the countries shipped to. A new batch was promised for the afternoon; this time Finland was included and the technical problems were gone, but the batch sold out in just 4 minutes. Most of those interested were left empty-handed, which caused annoyance around the net. All in all, the sales event cannot be said to have gone entirely smoothly. The next batch will probably go on sale next week, which will be announced on Bq's and Ubuntu's Twitter accounts. Bq announced afterwards that it had received over 12,000 orders in a minute.

There is no precise information yet on Bq availability outside the flash sales. The operator partners announced so far are 3 Sweden, amena.com (Spain), giffgaff (United Kingdom) and Portugal Telecom, and little more than their names is known.

Discussion about the Bq and related topics continues on the new mobile devices board of the Ubuntu Finland forums.

On the phone front, the next one to watch is Meizu, which is rumoured to show its Ubuntu phone at MWC at the beginning of March.

## January 31, 2015

### Niklas Laxström

#### GNU i18n for high priority projects list

Today, for a special occasion, I’m hosting this guest post by Federico Leva, dealing with some frequent topics of my blog.

A special GNU committee has invited everyone to comment on the selection of high priority free software projects (thanks M.L. for spreading the word).

In my limited understanding, from looking in every now and then over the past few years, the list has so far focused on “flagship” projects which are perceived to be the biggest opportunities, or roadblocks to remove, for the goal of having people use only free/libre/open source software.

A “positive” item is one which makes people want to embrace GNU/Linux and free software in order to use it: «I want to use Octave because it’s more efficient». A “negative” item is an obstacle to free software adoption, which we want removed: «I can’t use GNU/Linux because I need AutoCAD for work».

We want to propose something different: a cross-functional project, which will benefit no specific piece of software, but rather all of them. We believe that the key to the success of each and every free software project is going to be internationalization and localization. No i18n can help if the product is bad: here we assume that the idea of the product is sound and that we are able to scale its development; we “just” need more users, more work.

## What we believe

If free software is about giving control to the user, we believe it must also be about giving control of the language to its speakers. Proper localisation of software can only be done by people with a particular interest and competence in it, ideally native speakers who use the software.

It’s clear that there is little overlap between this group and developers, if nothing else because most free software projects have at most a handful of developers: all together, they can only know a fraction of the world’s languages. Translation is not, and can’t be, a subset of programming. A GNOME dataset showed a strong specialisation of documenters, coders and i18n contributors.

We believe that the only way to put them in control is to translate the wiki way: easily, with language competency as the only requirement; with no or very low barriers to access; using translations immediately in the software; correcting after the fact thanks to their usage, not with pre-publishing gatekeeping.

Translation should not be a labyrinth

In most projects, the i18n process is hard to join and incomprehensible, if explained at all. GNOME has a nice description of their workflow, which however is a perfect example of what the wiki way is not.

A logical consequence of the wiki way is that not all translators will know the software inside and out. Hence, to translate correctly, translators need message documentation right in their translation interface (context, possible values of parameters, grammatical role of words, …): we consider this a non-negotiable feature of any system chosen. Various research agrees.

Ok, but why care?

## I18n is a recipe for success

First. Developers and experienced users are often affected by the software localisation paradox, which means they only use software in English and will never care about l10n even if they are in the best position to help it. At this point, they are doomed; but the computer users of the future, e.g. students, are not. New users may start using free software simply because of not knowing English and/or because it’s gratis and used by their school; then they will keep using it.

With words we don’t like much, we could say: if we conquer some currently marginal markets, e.g. people under a certain age or several countries, we can then have a sufficient critical mass to expand to the main market of a product.

Research is very lacking on this aspect: there was quite some research on predicting viability of FLOSS projects, but almost nothing on their i18n/l10n and even less on predicting their success compared to proprietary competitors, let alone on the two combined. However, an analysis of SourceForge data from 2009 showed that there is a strong correlation between high SourceForge rank and having translators (table 5): for successful software, translation is the “most important” work after coding and project management, together with documentation and testing.

Second. Even though translation must not be like programming, translation is a way to introduce more people in the contributor base of each piece of software. Eventually, if they become more involved, translators will get in touch with the developers and/or the code, and potentially contribute there as well. In addition to this practical advantage, there’s also a political one: having one or two orders of magnitude more contributors of free software, worldwide, gives our ideas and actions a much stronger base.

Practically speaking, every package should be i18n-ready from the beginning (the investment pays back immediately) and its “Tools”/”Help” menu, or similarly visible interface element, should include a link to a website where everyone can join its translation. If the user’s locale is not available, the software should actively encourage joining translation.
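
As a minimal illustration of what being i18n-ready from the beginning means in practice, here is a sketch using GNU gettext via Python's standard library. The "myapp" domain and "locale" directory are hypothetical; with no catalog installed yet, the calls simply fall back to the source English:

```python
import gettext

# Load the catalog for the user's locale; fall back to the source
# strings when no translation is installed yet.
t = gettext.translation("myapp", localedir="locale", fallback=True)
_ = t.gettext

def greeting(name):
    # Translators get the whole sentence with a placeholder,
    # never fragments glued together in code.
    return _("Welcome, %s!") % name
```

Because every user-visible string goes through _() from day one, hooking the program up to a translation platform later is mostly a matter of extracting the catalog, e.g. with xgettext.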

Arjona Reina et al. 2013, based on the observation of 41 free software projects and 22 translation tools, actually claim that recruiting, informing and rewarding the translators is the most important factor for success of l10n, or even the only really important one.

Exton, Wasala et al. also suggest receiving in situ translations in a “crowdsourcing” or “micro-crowdsourcing” limbo, which we find superseded by a wiki. In fact, they end up requiring a “reviewing mechanism such as observed in the Wikipedia community” anyway, in addition to a voting system. Better to keep it simple and use a wiki in the first place.

Third. Extensive language support can be a clear demonstration of the power of free software. Unicode CLDR is an effort we share with companies like Microsoft or Apple, yet no proprietary software in the world can support 350 languages like MediaWiki. We should be able to say this of free software in general, and have the motives to use free software include i18n/l10n.

Research agrees that free software is more favourable for multilingualism because compared to proprietary software translation is more efficient, autonomous and web-based (Flórez & Alcina, 2011; citing Mas 2003, Bowker et al. 2008).

The obstacle here is linguistic colonialism, namely the self-disrespect billions of humans have for their own language. Language rights are often neglected and «some languages dominate» the web (UN report A/HRC/22/49, §84); but many don’t even try to use their own language even where they could. The solution can’t be exclusively technical.

Fourth. Quality. Proprietary software we see in the wild has terrible translations (for example Google, Facebook, Twitter). They usually use very complex i18n systems or they give up on quality and use vote-based statistical approximation of quality; but the results are generally bad. A striking example is Android, which is “open source” but whose translation is closed as in all Google software, with terrible results.

How to reach quality? There can’t be an authoritative source for what’s the best translation of every single software string: the wiki way is the only way to reach the best quality; by gradual approximation, collaboratively. Free software can be more efficient and have a great advantage here.

Indeed, quality of available free software tools for translation is not a weakness compared to proprietary tools, according to the same Flórez & Alcina, 2011: «Although many agencies and clients require translators to use specific proprietary tools, free programmes make it possible to achieve similar results».

## We are not there yet

Many have the tendency to think they have “solved” i18n. The internet is full of companies selling i18n/l10n services as if they had found the panacea. The reality is, most software is not localised at all, or is localised in very few languages, or has terrible translations. Explaining the reasons is not the purpose of this post; we have discussed or will discuss the details elsewhere. Some perspectives:

A 2000 survey confirms that education about i18n is most needed: «There is a curious “localisation paradox”: while customising content for multiple linguistic and cultural market conditions is a valuable business strategy, localisation is not as widely practised as one would expect. One reason for this is lack of understanding of both the value and the procedures for localisation.»

## Can we win this battle?

We believe it’s possible. The above may look too abstract, but it’s intentionally so. Figuring out the solution is not something we can do in this document, because making i18n our general strength is a difficult project: that’s why we argue it needs to be on the high priority projects list.

The initial phase will probably be one of research and understanding. As shown above, we have opinions everywhere, but too little scientific evidence on what really works: this must change. Where evidence is available, it should be known more than it currently is: a lot of education on i18n is needed. Sharing and producing knowledge also implies discussion, which helps the next step.

The second phase could come with a medium term concrete goal: for instance, it could be decided that within a couple years at least a certain percentage of GNU software projects should (also) offer a modern, web-based, computer-assisted translation tool with low barriers on access etc., compatible with the principles above. Requirements will be shaped by the first phase (including the need to accommodate existing workflows, of course).

This would probably require setting up a new translation platform (or giving new life to an existing one), because current “bigs” are either insufficiently maintained (Pootle and Launchpad) or proprietary. Hopefully, this platform would embed multiple perspectives and needs of projects way beyond GNU, and much more un-i18n’d free software would gravitate here as well.

A third (or fourth) phase would be about exploring the uncharted territory with which we share so little, like the formats, methods and CAT tools existing out there for translation of proprietary software and of things other than software. The whole translation world (millions of translators?) deserves free software. For this, a way broader alliance will be needed, probably with university courses and others, like the authors of Free/Open-Source Software for the Translation Classroom: A Catalogue of Available Tools and tuxtrans.

## “What are you doing?”

Fair question. This proposal is not all talk. We are doing our best, with the tools we know. One of the challenges, as Wasala et al. say, is having a shared translation memory to make free software translation more efficient: so we are building one. InTense is our new showcase of free software l10n; it uses existing translations to offer an open translation memory to everyone, and we believe we can eventually include practically all free software in the world.

For now, we have added a few dozen GNU projects and others, with 55 thousand strings and about 400 thousand translations. See also the translation interface for some examples.

If translatewiki.net is asked to do its part, we are certainly available. MediaWiki has the potential to scale incredibly, after all: see Wikipedia. In the future, a wiki like InTense could be switched from read-only to read/write and become an über-translatewiki.net, translating thousands of projects.

But that’s not necessarily what we’re advocating: what matters is the result, how much more well-localised software we get. In fact, MediaWiki gave birth to thousands of wikis; its success also lies in its principles being adopted by others, see e.g. the huge StackExchange family (whose Q&A sites are wikis and use a free license, though more individual-centred).

Maybe the solution will come with hundreds or thousands of separate installs of one or a handful of software platforms. Maybe the solution will not be to “translate the wiki way”, but a similar yet different concept, which still puts localisation in the hands of users, giving them real freedom.

What do you think? Tell us in the comments.

## January 12, 2015

### Henri Bergius

#### Nemein has a new home

When I flew to Tenerife to sail across the Atlantic in late November, there was excitement in the air. Nemein — the software company I started in 2001 with Henri Hovi and Johannes Hentunen, and left later to build an AI-driven web publishing tool — was about to be sold.

Today, I'm happy to tell that Nemein has been acquired by Anders Innovations, a fast-growing software company.

I had a videoconference this morning with Nemein's and Anders Inno's CEOs, Lauri and Tomi, and it seems the Nemein team has indeed found a good home.

Technologically, the companies are a good fit. Both companies have a strong emphasis on building business systems on top of the Django framework. To this mix, Nemein will also bring its long background with Midgard CMS and mobile ecosystems like MeeGo and its successor, Sailfish.

I wish the whole team at Anders Innovations the best, and hope they will be able to continue functioning as a champion of the decoupled content management idea.

Nemein has also been a valuable contributor to the Flowhub ecosystem, which I hope will continue.

For those interested in the background of Nemein, I wrote a longish story of the company's first ten years back in 2011. I also promise to write about The Grid soon!

## December 31, 2014

### Riku Voipio

#### Crowdfunding better GCompris graphics

GCompris is the most established open source educational game for kids. Here we practise mouse use with an Efika smartbook. In this subgame, the mouse is moved around to uncover an image hidden behind.

While GCompris is nice, it badly needs nicer graphics. Now the GCompris authors are running an Indiegogo crowdfund for exactly that: to get new, unified graphics.

Why should you fund? Apart from the generic "I want to be nice to an OSS project", I see a couple of reasons specific to this crowdfund.

First, to show kids that apps can be changed! Instead of just using existing iPad apps as a consumer, GCompris allows you to show kids how games are built and modified. With the new graphics, more kids will play longer, and eventually some will ask if something can be changed or added.

Second, GCompris has recently become Qt/QML based, making it more portable than before. Wouldn't you like to see it on your Jolla tablet or a future Ubuntu phone? The crowdfund doesn't promise new ports, but if you are eager to show your friends nice-looking apps on your platform, this is probably one of the easiest ways to help make them happen.

Finally, as a nice way to say happy new year 2015 :)

## December 29, 2014

### Wikimedia Suomi

#### Become the Executive Director of Wikimedia Finland

For the coming year, Wikimedia Suomi ry is looking for an

# Executive Director

We are a non-profit association that promotes the use and awareness of Wikimedia Foundation projects such as the Wikipedia encyclopedia. Our activities include training events, communications, helping to open up materials, and software development. The association was founded in 2009 and has 49 members.

As Executive Director you are responsible for the day-to-day affairs of the association and act as the supervisor of the other staff. Your duties include:

• Preparing grant applications
• Liaising with the accounting firm and the authorities on practical matters
• Guiding and advising assistants and other employees
• Planning and organising events
• Participating in the association's meetings and organising them when needed
• Travelling and representing the association abroad when needed
• Documenting and reporting on activities
• Arranging the recruitment of employees
• Project management, project planning and budgeting, reporting to funders
• Communication tasks, such as writing blog posts and editing materials

You can get to know our projects on our website wikimedia.fi. Examples include Wikimaps, which brings historical map materials into a single interface where they can be enriched with different data layers, and Bring Culture to Wikipedia, which was a series of training events for friends of culture. Most of the working time, about 60%, goes to project management.

The work involves close cooperation with Wikipedia hobbyists and members of the Wikimedia movement. The cooperation is international and multidisciplinary: you will deal with coders, veteran humanists, lawyers and museum guides within the same week, and do great things with them.

The Executive Director's role requires:

• Knowledge of the Wikimedia projects and communities and experience with their working culture
• Experience with the finances of associations, or other knowledge of financial administration
• Knowledge of the NGO field and the cultural sector
• Experience in communications
• Experience in organising seminars and training events
• An active and independent working style
• Good project management skills
• Good IT skills
• Excellent written Finnish (the association operates in Finnish)
• A sufficiently good command of Swedish
• Fluent English and the ability to write high-quality, readable English
• Flexibility regarding working hours (the job occasionally involves evening and weekend work)
• Broad experience in communication tasks and knowledge of online communications
• Knowledge of advocacy matters and grant practices
• The ability to motivate communities to participate

The Executive Director reports to the association's board.

The work begins as soon as a suitable applicant is found. We will start processing applications on 10 January and will invite applicants to interviews during January. The position is fixed-term, running until 30 June 2015, as the funding we have received for it lasts until then. The trial period is one month.

Send your application by email to tommi at wikimedia dot fi.

## December 23, 2014

### Ubuntu-blogi

#### The first Ubuntu phone goes on sale in February

The Spanish company Bq will bring the first Ubuntu phone to market in Europe in February. More specific country information has not yet been published, and the official launch comes later.

Jussi Kekkonen (Tm T) is one of the Ubuntu community members invited into the "inner circle", and he has written about his first impressions in the Ubuntu Suomi community on Google+ (direct link to the post). We will surely hear more both from the invitation-only event in London and, of course, as official information appears.

Update: A new board, Ubuntu on mobile devices, has now been opened on the Ubuntu Finland forums. Welcome to discuss either the community images or the upcoming phones you will find on store shelves!

## November 07, 2014

### Riku Voipio

#### Adventures in setting up local lava service

Linaro uses LAVA as a tool to test a variety of devices. So far I had not installed it myself, mostly because I assumed it to be enormously complex to set up. But thanks to Neil Williams' work on packaging, installation has become a lot easier. Follow the Official Install Doc and the Official install to Debian Doc; roughly:

1. Install Jessie into kvm

```
kvm -m 2048 -drive file=lava2.img,if=virtio -cdrom debian-testing-amd64-netinst.iso
```
2. Install lava-server
```
apt-get update
apt-get install -y postgresql nfs-kernel-server apache2
apt-get install lava-server
# answer debconf questions
a2dissite 000-default && a2ensite lava-server.conf
service apache2 reload
lava-server manage createsuperuser --username default --email=foo.bar@example.com
$EDITOR /etc/lava-dispatcher/lava-dispatcher.conf  # make sure LAVA_SERVER_IP is right
```

That's the generic setup. Now you can point your browser to the IP address of the kvm machine and log in with the default user and the password you made.

3 ... 1000. Each LAVA instance is site-customized for the boards, network, serial ports, etc. In this example, I now add a single Arndale board:

```
cp /usr/lib/python2.7/dist-packages/lava_dispatcher/default-config/lava-dispatcher/device-types/arndale.conf /etc/lava-dispatcher/device-types/
sudo /usr/share/lava-server/add_device.py -s arndale arndale-01 -t 7001
```

This generates an almost usable config for the Arndale. For site specifics, I have USB-to-serial adapters. Outside the kvm guest, I provide access to the serial ports using the following ser2net config:

```
7001:telnet:0:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT
7002:telnet:0:/dev/ttyUSB1:115200 8DATABITS NONE 1STOPBIT
```

TODO: make ser2net not run as root, and ensure the USB-to-serial devices always get the same names.

For automatic power reset, I wanted something cheap, yet something that wouldn't require much soldering (I'm not a real embedded engineer; I prefer the software side ;). After discussing with Hector, who hinted at prebuilt relay boxes, I chose one from eBay, a KMTronic 8-port USB relay. So now I have this cute boxed nonsense hack.
The USB relay is driven with a short script, hard-reset-1:

```
stty -F /dev/ttyACM0 9600
echo -e '\xFF\x01\x00' > /dev/ttyACM0
sleep 1
echo -e '\xFF\x01\x01' > /dev/ttyACM0
```

Sidenote: if you don't have or don't want an automated power relay for LAVA, you can always replace this script with something along the lines of "mpg123 puny_human_press_the_power_button_now.mp3".

Both the serial port and the reset script are on a server with the DNS name aimless. So we take the /etc/lava-dispatcher/devices/arndale-01.conf that add_device.py created and make it look like this:

```
device_type = arndale
hostname = arndale-01
connection_command = telnet aimless 7001
hard_reset_command = slogin lava@aimless -i /etc/lava-dispatcher/id_rsa /home/lava/hard-reset-1
```

Since in my case I'm only going to test with tftp/nfs boot, the Arndale board only needs to be set up with a U-Boot bootloader ready on power-on. Now everything is ready for a test job. I have a locally built kernel and device tree, and I export the directory using the httpd available by default in Debian... Python!

```
cd out/
python -m SimpleHTTPServer
```

Go to the LAVA web server, select api -> tokens and create a new token. Next we add the token and use it to submit a job:

```
$ sudo apt-get install lava-tool
$ lava-tool auth-add http://default@lava-server/RPC2/
$ lava-tool submit-job http://default@lava-server/RPC2/ lava_test.json
submitted as job id: 1
$
```

The first job should now be visible in the LAVA web frontend, in the scheduler -> jobs section. If everything goes fine, the relay will click in a moment and the job will finish in a few minutes.

## November 01, 2014

### Riku Voipio

#### Using networkd for kvm tap networking

Setting up basic systemd-networkd was recently described by Joachim, and the post inspired me to try it as well. The twist is that in my case I need a bridge for my KVM guest running the LAVA server and for arm/aarch64 qemu system emulators. For background, qemu/kvm support a few ways to provide network to guests.
The default is user networking, which requires no privileges, but is slow and based on ancient SLIRP code. The other common option is tap networking, which is fast, but complicated to set up. It turns out that with networkd and the qemu bridge helper, tap is easy to set up:

```
$ for file in /etc/systemd/network/*; do echo $file; cat $file; done
/etc/systemd/network/eth.network
[Match]
Name=eth1
[Network]
Bridge=br0
/etc/systemd/network/kvm.netdev
[NetDev]
Name=br0
Kind=bridge
/etc/systemd/network/kvm.network
[Match]
Name=br0
[Network]
DHCP=yes
```
Diverging from Joachim's simple example, we replaced "DHCP=yes" with "Bridge=br0" in eth.network. Then we proceed to define the bridge (in kvm.netdev) and give it an IP via DHCP (in kvm.network). On the qemu/kvm side, if you haven't used the bridge helper before, you need to give it permissions (setuid root or cap_net_admin) to create a tap device to attach to the bridge. The helper also needs a configuration file telling it which bridge it may meddle with.
```
# cat > /etc/qemu/bridge.conf <<__END__
allow br0
__END__
# setcap cap_net_admin=ep /usr/lib/qemu/qemu-bridge-helper
```
Now we can start kvm with bridge networking as easily as with user networking:
```
$ kvm -m 2048 -drive file=jessie.img,if=virtio -net bridge -net nic,model=virtio -serial stdio
```

The manpages systemd.network(5) and systemd.netdev(5) do a great job explaining the files. Qemu/kvm networking docs are unfortunately not as detailed.

## October 30, 2014

### Wikimedia Suomi

#### Bringing Cultural Heritage to Wikipedia

Course participants editing Wikipedia at the first gathering at the Finnish Broadcasting Company Yle.

The Bring Culture to Wikipedia editathon course is already over halfway through its span. The course, co-organised by Wikimedia Finland, Helsinki Summer University and six GLAM organisations, aims to bring more Finnish cultural heritage to Wikipedia. The editathon gatherings are held at the various organisations' locations, where the participants get a ”look behind the scenes”: the organisations show their archives and present their fields of expertise. The course also provides a great opportunity to learn the basics of Wikipedia, as experienced Wikipedian Juha Kämäräinen gives a lecture at each gathering.

Yle personnel presenting the record archives.

The first course gathering was held at the Archives of the Finnish Broadcasting Company Yle on 2nd October. The course attendees got familiar with the Wikipedia editor and added information to Wikipedia about the history of Finnish television and radio. The representatives of Yle also gave a tour of the tape and record archives. Quality images that Yle opened earlier this year were added to articles. Course attendee Maria Koskijoki appreciated the possibility to get started without prior knowledge. ”The people at Yle offered themes of suitable size. I also got help in finding source material.”

Cooperation with GLAMs

Finnish National Gallery personnel presenting sketch archives at the Ateneum Art Museum.

This kind of course is a new model of cooperation with GLAM organisations.
The other cooperating organisations are Svenska litteratursällskapet i Finland, The Finnish National Gallery, Helsinki City Library, The Finnish Museum of Photography and Helsinki Art Museum. Wikimedia Finland's goal is to encourage organisations to open their high-quality materials to a wider audience.

There are many ways to upload media content to Wikimedia Commons. One of the new methods is using the GLAMWiki Toolset for batch uploads. Wikimedia Finland invited the senior developer of the project, Dan Entous, to hold a GW Toolset workshop for the representatives of GLAMs and the staff of Wikimedia Finland in September, before the beginning of the course. The workshop was the first of its kind outside the Netherlands.

Course coordinator Sanna Hirvonen says that GLAM organisations have begun to see Wikipedia as a good channel for sharing their specialised knowledge. "People find the information from Wikipedia more easily than from the homepages of the organisations."

This isn't the first time that Wikimedians and culture organisations in Finland have co-operated: last year the Museum of Contemporary Art Kiasma organised a 24-hour Wikimarathon in collaboration with Wikimedia Finland. Over 50 participants added information about art and artists to Wikipedia. Wiki workshops have been held at the Rupriikki Media Museum in Tampere and at the Ateneum Art Museum in Helsinki.

Wikipedian guiding a newcomer at the Ateneum Art Museum.

Images taken on the course can be viewed in Wikimedia Commons. All photos by Teemu Perhiö. CC-BY-SA 4.0.

## October 16, 2014

### Wikimedia Suomi

#### Swedish Wikipedia grew with the help of bots

For a very long time Finland was part of Sweden. Maybe that explains why the Finns now always love to compete with the Swedes. And when I noticed that the Swedish Wikipedia is much bigger than the Finnish one, I started to challenge people in my trainings: we can't let the Swedes beat us in this Wikipedia battle!
I was curious about how they did it, and later I found out they had used a "secret" weapon: bots. So when I was visiting Wikimania in London in August, I did some video interviews on the subject. First, Johan Jönsson from Sweden tells more about the articles created by bots and what he thinks of them:

<iframe allowfullscreen="true" class="youtube-player" frameborder="0" height="382" src="http://www.youtube.com/embed/BjFXtWAgymw?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" type="text/html" width="625"></iframe>

Not everyone likes the idea of bot-created articles, and Erik Zachte, Data Analyst at the Wikimedia Foundation, shared this feeling in the beginning. Then something happened, and he has since changed his view. Learn more about this at the end of this video interview:

<iframe allowfullscreen="true" class="youtube-player" frameborder="0" height="382" src="http://www.youtube.com/embed/UWBIYMUypWA?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" type="text/html" width="625"></iframe>

Now I am curious to hear your opinion about bot-created articles! Should we all follow the Swedes and grow the number of articles in our own Wikipedias?

PS. There are more Wikipedia-related video interviews on my YouTube channel in a playlist called Wiki Wednesday.

## October 07, 2014

### Ubuntu-blogi

#### The Ubuntu Suomi forums are offline

Update 30.10.2014: The forums are back online, upgraded to SMF 2.x.

Update 20.11.2014: Following discussions, Canonical will, if needed, continue to support maintaining Ubuntu Suomi's SMF 2.x installation, so the forums' future is secure regardless of whether they move to a separate server.

Canonical's sysadmins have detected a security breach on the Ubuntu Suomi forums and have shut the service down as a precaution. The breach was most likely caused by the outdated application version that was in use.
Ubuntu Suomi had intended to migrate to a different application even before the break-in, as the software was known to be outdated and no longer supported by Canonical. We aim to get a new service up and running as soon as possible. There is no information yet about the new platform or its location, but we will announce it as soon as we can. Restoring the old forum content is being looked into (backups exist; the question is mainly where and in what form to put them up).

For now, we recommend using the Ubuntu Suomi Google+ community (see also the profile page), the Facebook page, or the IRC channel #ubuntu-fi on the Freenode network. You can join the channel with a browser-based client at https://kiwiirc.com/client/irc.ubuntu.com:+6697/#ubuntu-fi. In addition, the Linux.fi forums are available at http://linux.fi/foorumi/. Note that even though the IRC channel appears to have many people present, not all of them are necessarily reading the channel continuously. Getting an answer can take a long time, and we hope you have the patience to wait so that we get a chance to help you. Patience pays!

We do not know what data, if any, has leaked from the forums, but of the data hidden from users, email addresses and salted password hashes may have leaked. If you have used the same password in some other service, we recommend changing it.

## October 01, 2014

### Wikimedia Suomi

#### GLAMs and GLAMWiki Toolset

The GLAMWiki Toolset project is a collaboration between various Wikimedia chapters and Europeana. The goal of the project is to provide easy-to-use tools for making batch uploads of GLAM (Galleries, Libraries, Archives & Museums) content to Wikimedia Commons. Wikimedia Finland invited the senior developer of the project, Dan Entous, to Helsinki to hold a GW Toolset workshop for representatives of GLAMs and the staff of Wikimedia Finland on 10th September.
The workshop was the first of its kind outside the Netherlands.

GLAMWikiToolset training in Helsinki. Photo: Teemu Perhiö. CC-BY.

I took part in the workshop in the role of tech assistant for Wikimedia Finland. Since the workshop I have been trying to figure out what is needed for using the toolset from a GLAM perspective. In this text I concentrate on the technical side of these requirements.

## What is needed for GWToolset?

From a technical point of view, the use of GWToolset can be split into three sections. First, there are things that must be done before using the toolset. The GWToolset requires metadata as an XML file that is structured in a certain way. The image files must also be addressable by direct URLs, and the domain name of the image server must be added to the upload whitelist in Commons.

The second section concerns practices in Wikimedia Commons itself. This means getting to know the templates, such as the institution, photograph and artwork templates, as well as finding the categories that are suitable for the uploaded material. For someone who is not a Wikipedian, like myself, it takes a while to get to know the templates and especially the categories.

The third section is actually making the uploads using the toolset itself, which I find easy to use. It has a clear workflow, and with a little assistance there should be no problems for GLAMs using it. Besides, there is a sandbox called Commons Beta where one can rehearse before going public.

I believe that the bottleneck for GLAMs is the first section: things that must be done before using the toolset. More precisely, creating a valid XML file for the toolset. Of course, if an organisation has a competent IT department with resources to work on material donations to Wikimedia Commons, then there is no problem. However, this could be a problem for smaller, less resourceful, organisations.

## Converting metadata in practice

Like I said, the GWToolset requires an XML file with a certain structure.
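Purely as an illustration, such a file is flat and record-oriented; the element names and the /tmp path below are hypothetical, not the actual GWToolset schema:

```shell
# Hypothetical sketch of a flat, record-per-item metadata file of the kind
# the toolset consumes. Element names are illustrative only, not the real
# GWToolset schema; example.org is a placeholder URL.
cat > /tmp/records.xml <<'__END__'
<records>
  <record>
    <title>Blenda</title>
    <author>Hugo Simberg</author>
    <year>1896</year>
    <url>http://example.org/images/blenda.jpg</url>
  </record>
</records>
__END__
```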
As far as I know, there is no information system that can directly produce such a file. However, most systems are able to export metadata in XML format. Even though the exported file is not valid for GWToolset, it can be converted into a valid one with XSLT. XSLT is designed for exactly this task, and it has a very powerful template mechanism for XML handling. This means that the amount of code stays minimal compared to the other options. The good news is that XML transformations are relatively easy to do: XSLT is our friend when it comes to XML manipulation.

In order to learn what is needed for such transforms with real data, I made a couple of practical demos. I wanted to create a very lightweight solution for transforming the metadata sets for the GWToolset. Modern web browsers are flexible application platforms, and for example web scraping can be done easily with JavaScript.

A browser-based solution has many advantages. The first is that every Internet user already has a browser, so there is no downloading, installing or configuring needed. The second advantage is that browser-based applications that use external datasets do not create traffic to the server where the application is hosted. Browsers can also be used locally. This allows organisations to download the page files, modify them, make conversions locally in-house, and get their materials onto Wikimedia Commons.

XSLT of course requires a platform to run on. There is a JavaScript library called Saxon-CE that provides such a platform in browsers. So a web browser offers all that is needed for metadata conversions: web scraping, XML handling and conversions through XSLT, and user interface components. Of course, the XSLT files can also be run in any other XSLT environment, like xsltproc.

## Demos

Blenda and Hugo Simberg, 1896. Source: The Finnish National Gallery, CC BY 4.0.

The first demo I created uses an open data image set published by the Finnish National Gallery.
It consists of about one thousand digitised negatives of and by the Finnish artist Hugo Simberg. The set also includes digitally created positives of the images. The metadata is provided as a single XML file. The conversion in this case is quite simple, since the original XML file is flat (i.e. there are no nested elements). Basically the original data is passed through as it is, with a few exceptions. The "image" element in the original metadata includes only an image id, which must be expanded into a full URL. I used a dummy domain name here, since the images are available as a zip file and therefore cannot be addressed individually. Another exception is the "keeper" element, which holds the name of the owner organisation. This was changed from the Finnish name of the National Gallery to the name that corresponds to their institutional template in Wikimedia Commons.

• example record: http://opendimension.org/wikimedia/simberg/xml/simberg_sample.xml
• source metadata: http://www.lahteilla.fi/simberg-data/#/overview
• conversion demo: http://opendimension.org/wikimedia/simberg/
• direct link to the XSLT: http://opendimension.org/wikimedia/simberg/xsl/simberg_clean.xsl

Photo: Signe Brander. Source: Helsinki City Museum, CC BY-ND 4.0.

In the second demo I used the materials provided by the Helsinki City Museum. Their materials in Finna are licensed as CC BY-ND 4.0. Finna is an "information search service that brings together the collections of Finnish archives, libraries and museums". Currently there is no API to Finna. Finna provides metadata in the LIDO format, but there is no direct URL to the LIDO file. However, the LIDO can be extracted from the HTML. LIDO is a deep format, so the conversion mostly consists of picking the elements from the LIDO file and placing them in a flat XML file. For example, the name of the author in LIDO sits in a quite deep structure.
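To sketch what that picking looks like, here is a minimal XSLT fragment that pulls one deeply nested value into a flat element. The XPath is illustrative only, not the exact LIDO schema:

```shell
# A minimal XSLT flattening sketch: select a deeply nested name and emit it
# as a single flat element. The element path is illustrative, not the real
# LIDO structure; the /tmp path is just for this sketch.
cat > /tmp/lido2flat.xsl <<'__END__'
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <record>
      <author>
        <xsl:value-of select="//actor/nameActorSet/appellationValue"/>
      </author>
    </record>
  </xsl:template>
</xsl:stylesheet>
__END__
# Run it with, e.g.: xsltproc /tmp/lido2flat.xsl record.xml
```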
• example LIDO record: http://opendimension.org/wikimedia/finna/xml/example_LIDO_record.xml
• source metadata: https://hkm.finna.fi/
• conversion demo: http://opendimension.org/wikimedia/finna/ (Please note that the demo requires that the same-origin policy restrictions are loosened in the browser. The simplest way to do this is to start Google Chrome with the switch --disable-web-security. On Linux that would be: google-chrome --disable-web-security, and on a Mac (sorry, I cannot test this): open -a "Google Chrome" --args --disable-web-security. For Firefox, see: http://www-jo.se/f.pfleger/forcecors-workaround)
• direct link to the XSLT: http://www.opendimension.org/wikimedia/finna/xsl/lido2gwtoolset.xsl

## Conclusion

These demos are just examples; no actual data has yet been uploaded to Wikimedia Commons. The aim is to show that the XML conversions needed for GWToolset are relatively simple, and that in order to use GWToolset an organisation does not have to have an army of IT engineers. The demos could certainly be better. For example, the author name must be changed to reflect the author name in Wikimedia Commons. But again, that is just a few lines of XSLT and it is done.

## September 25, 2014

### Wikimedia Suomi

#### Building an open Finland

Avoin Suomi 2014, 15.-16.9.2014. Photo: Kimmo Virtanen. CC-BY.

The Avoin Suomi 2014 event gathered a wide variety of open knowledge and open data actors to Wanha Satama in Helsinki. Wikimedia Finland took part in the event as an exhibitor, sharing a stand with the AvoinGLAM network. The stand presented Wikimedia's activities from different angles. GLAM work was also represented by the materials opened for free use on the Open Cultural Data master course. In addition, Wikimedia took part in the eOppimiskeskus stand.

The main purpose of the Avoin Suomi event was to present various open data projects and to encourage public authorities to open up their data reserves. Open knowledge is clearly considered important at the level of the Finnish state.
This was illustrated by the fact that the event was organised by the Prime Minister's Office, and the opening speech was given by Prime Minister Alexander Stubb.

So what can Wikimedia offer to public sector organisations? Wikimedia does open knowledge at a practical level. Wikimedia's projects, Wikipedia and the media file repository Commons, are already well-known, international and multilingual platforms. With these platforms, both cultural organisations and government authorities can open up and link their own data reserves. Wikimedia is a non-profit organisation, and its sites are free of charge and free of advertising. This autumn Wikimedia Finland is organising the Bring Culture to Wikipedia course in cooperation with cultural organisations.

Wikidata is a new way to open machine-readable data for free use. Wikidata is becoming a comprehensive reference database covering the topics included in Wikipedia, and it would be useful for public administration and researchers to use it as a reference. Wikidata will be used as a platform, for example, in the British ContentMine project, which mines data from scientific literature for free use. This autumn Wikimedia Finland is organising Wikidata training in Helsinki; those interested are asked to express their interest here.

Historical maps are an excellent example of how public sector organisations can work together with non-profit organisations. Wikimaps is a Wikimedia Finland project that aims to collect old maps into Wikimedia Commons, georeference them with volunteer effort, and make use of them in various ways. Besides Wikimedia, the use of old maps at the Avoin Suomi fair was presented for example by Helsinki Region Infoshare and the National Land Survey of Finland (Maanmittauslaitos), both of which have plenty of historical maps as well as other geographic data.

Wikimedia's stand at the event. Photo: Kimmo Virtanen. CC-BY.
The fair highlighted the hope that the digitisation of information and the opening of government data would lead to new companies and thereby to economic growth. The event did showcase interesting new open data services, such as Nearhood, which gathers local neighbourhood information and news in one place, and the Ministry of the Environment's Envibase project, which opens up environmental data for public use.

A persistent problem in open knowledge projects has been that the societal impact of open data is often hard to prove. This is especially common with cultural materials, because there are not necessarily any easily measurable economic benefits. One of the event's keynote speakers, Beth Noveck from the United States, emphasised that instead of faith-based arguments, the open data field should start finding evidence of the societal and economic impacts of open knowledge. Noveck presented projects from the United Kingdom and the United States that are in many ways further along than Finland. Perhaps these examples could yield ideas applicable in Finland as well.

Personal data was also a topic at the fair. The MyData panel discussed the individual's possibilities and constraints in making use of their own personal data. Open Knowledge Finland has also produced a report on the subject. Personal data is an interesting topic that provokes differing opinions. On the one hand, public opinion strongly favours citizens having the right to control data collected about them. On the other hand, the Wikimedia Foundation, for example, has strongly criticised the EU's "right to be forgotten" rulings, because they can lead to censorship that distorts source materials.

Wikimedia Finland thanks Samsung, which helpfully lent computing equipment for use at the fair.

## September 16, 2014

### Henri Bergius

#### Flowhub Kickstarter delivery

It is now a year since our NoFlo Development Environment Kickstarter got funded.
Since then our team, together with several open source contributors, has been busy building the best possible user interface for Flow-Based Programming.

When we set out on this crazy adventure, we still mostly had only NoFlo and JavaScript in mind. But there is nothing inherently language-specific in FBP or our UI, and so when people started making other runtimes compatible with the protocol, we embraced the idea of full-stack flow-based programming. Here is how the runtime registration screen looks with the latest release:

This hopefully highlights a bit of what can be done with Flowhub right now. I know there are several other runtimes that are not yet listed there. We should have something interesting to announce in that space soon!

## Live mode

The Flowhub release made today includes several interesting features apart from giving private repository access to our Kickstarter backers. One I'm especially happy about is what we call live mode.

The live mode, initially built by Lionel Landwerlin, enables Flowhub to discover and connect to pieces of flow-based software running in different environments. With it you can monitor, debug, and modify applications without having to restart them! We made a short demo video of this in action with Flowhub, a Raspberry Pi and an NFC tag.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="http://www.youtube.com/embed/EdgeSDFd9p0" width="560"></iframe>

## Getting started

Our backers should receive an email today with instructions on how to activate their Flowhub plans. For those who missed the Kickstarter, there should be another batch of Flowhub pre-orders available soon.

Just like with Travis and GitHub, Flowhub is free for open source development. So everybody should be able to start using it immediately, even without a plan. If you have any questions about Flow-Based Programming or how to use Flowhub, please check out the various ways to get in touch on the NoFlo support page.
## August 13, 2014

### Riku Voipio

#### Booting Linaro ARMv8 OE images with Qemu

A quick update: Linaro ARMv8 OpenEmbedded images work just fine with qemu 2.1 as well:

$ wget http://releases.linaro.org/14.07/openembedded/aarch64/Image
$ wget http://releases.linaro.org/14.07/openembedded/aarch64/vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ gunzip vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt \
  -kernel Image -append 'root=/dev/vda2 rw rootwait mem=1024M console=ttyAMA0,38400n8' \
  -drive if=none,id=image,file=vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img \
  -netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[    0.000000] Linux version 3.16.0-1-linaro-vexpress64 (buildslave@x86-64-07) (gcc version 4.8.3 20140401 (prerelease) (crosstool-NG linaro-1.13.1-4.8-2014.04 - Linaro GCC 4.8-2014.04) ) #1ubuntu1~ci+140726114341 SMP PREEMPT Sat Jul 26 11:44:27 UTC 20
[    0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
root@genericarmv8:~#
Quick benchmarking with the age-old ByteMark nbench:
| Index | Qemu | Foundation | Host |
|---------|-------|------------|--------|
| Memory | 4.294 | 0.712 | 44.534 |
| Integer | 6.270 | 0.686 | 41.983 |
| Float | 1.463 | 1.065 | 59.528 |

Baseline (LINUX): AMD K6/233*
Qemu is up to 8x faster than the Foundation model on integers, but only about 50% faster on floating point. Meanwhile, emulating ARMv8 on the host PC is 7-40x slower than executing native instructions.
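Those ratios come straight from the index values above, and can be rechecked with a quick awk one-liner (the 7-40x range corresponds to the integer and float rows):

```shell
# Recompute the speedup ratios from the nbench index values in the table above.
awk 'BEGIN {
  printf "integer, qemu vs foundation: %.1fx\n", 6.270 / 0.686
  printf "float, qemu vs foundation:   %.1fx\n", 1.463 / 1.065
  printf "integer, host vs qemu:       %.1fx\n", 41.983 / 6.270
  printf "float, host vs qemu:         %.1fx\n", 59.528 / 1.463
}'
# prints 9.1x, 1.4x, 6.7x and 40.7x respectively
```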

## August 05, 2014

### Riku Voipio

#### Testing qemu 2.1 arm64 support

Qemu 2.1 was released just a few days ago, and is now available in Debian/unstable. Trying out a (virtual) arm64 machine is now just a few steps away for unstable users:
$ sudo apt-get install qemu-system-arm
$ wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-arm64-disk1.img
$ wget https://cloud-images.ubuntu.com/trusty/current/unpacked/trusty-server-cloudimg-arm64-vmlinuz-generic
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt \
  -kernel trusty-server-cloudimg-arm64-vmlinuz-generic \
  -append 'root=/dev/vda1 rw rootwait mem=1024M console=ttyAMA0,38400n8 init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring' \
  -drive if=none,id=image,file=trusty-server-cloudimg-arm64-disk1.img \
  -netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[    0.000000] Linux version 3.13.0-32-generic (buildd@beebe) (gcc version 4.8.2 (Ubuntu/Linaro 4.8.2-19ubuntu1) ) #57-Ubuntu SMP Tue Jul 15 03:52:14 UTC 2014 (Ubuntu 3.13.0-32.57-generic 3.13.11.4)
[    0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
-snip-
...
ubuntu@ubuntu:~$ cat /proc/cpuinfo
Processor       : AArch64 Processor rev 0 (aarch64)
processor       : 0
Features        : fp asimd evtstrm
CPU implementer : 0x41
CPU architecture: AArch64
CPU variant     : 0x1
CPU part        : 0xd07
CPU revision    : 0
Hardware        : linux,dummy-virt
ubuntu@ubuntu:~$
The "init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring" part is Ubuntu cloud-init machinery that sets the ubuntu user's password to "randomstring". Don't use "randomstring" literally there if the machine is connected to the internet...
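One quick way to generate a throwaway password instead of the literal "randomstring" (this uses openssl purely as an example; any random-string generator will do):

```shell
# Generate a 24-character hex password for the ubuntu-pass= parameter.
# Using openssl here is just one option, assumed to be installed.
pass=$(openssl rand -hex 12)
echo "ubuntu-pass=$pass"
```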

For more detailed writeup of using qemu-system-aarch64, check the excellent writeup from Alex Bennee.

## May 07, 2014

### Henri Bergius

#### Flowhub public beta: a better interface for Flow-Based Programming

Today I'm happy to announce the public beta of the Flowhub interface for Flow-Based Programming. This is the latest step in the adventure that started with some UI sketching early last year, went through our successful Kickstarter — and now — thanks to our 1 205 backers, it is available to the public.

## Getting Started

This post will go into more detail on how the new Flowhub interface works in a bit, but for those who want to dive straight in, here are the relevant links:

Make sure to read the Getting Started guides and check out the Flowhub FAQ. There is also a new video available:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/8Dos61_6sss" width="640"></iframe>

Both the web version and the Chrome app are built following the offline first philosophy, and keep everything you need stored locally inside your browser. The Chrome app and the upcoming iOS and Android builds will enable us to later introduce capabilities that are not possible inside regular browsers, like talking directly to MicroFlo runtimes over USB or Bluetooth. But other than that they're similar in features and user experience.

## New User Interface

If you read the NoFlo Update from last October, you might notice that the new Flowhub user interface looks and feels quite different from it.

This new design was implemented to improve touch-screen friendliness, as well as to give Flowhub a more focused, unique look. It also allowed us to follow some interesting UX paths that I'll explain next.

### Zooming

One typical problem in visual programming tools is that they can become quite cluttered with information. To solve this, we utilized the concept of Zooming User Interfaces, which allow us to show a clear overview of a program when zoomed out, and reveal all kinds of detail about it when zoomed in.

Zooming works with two-finger scroll on typical desktop computers, or with the pinch gesture on touch-enabled devices.

Another interface concept that we used to make interactions faster and more contextual is Pie Menus.

For example, you can easily navigate to subgraphs and component source code with the menu:

When you have selected multiple nodes, you can use the menu to group them or move them to a new subgraph:

The menu can also be used for removing edges or nodes:

You can activate the pie menu in the graph editor with a right mouse click, or with a long press on touch-enabled devices.

### Component Editor

Another new major feature is in-app component editing. If your runtime supports it, you can at any time create or modify custom components for your project and they'll become immediately available for your graphs.

The programming languages available for component creation depend on the runtime. With NoFlo these are JavaScript and CoffeeScript. With another runtime they might be C, Java, or Python.

### Offline First

While some claim that in reality you're never offline, the truth is that there are many situations where Internet connectivity is either unavailable, unreliable, or simply expensive. Think of a typical conference or hackathon, for instance.

Because of this — and to give software developers the privacy they deserve — Flowhub has been designed to work "offline first". All your graphs, projects, and custom components are stored locally in your browser's Indexed Database and only transmitted over the network when you wish to push to a GitHub project, or interact with a remote runtime.

We're following a very similar UI concept as Amazon Kindle in that you can download projects locally to your device, or browse the ones you have available in the cloud:

At any point you can push your changes to a graph or a component to GitHub:

Runtime discovery happens through a central service, but once you know the address of your FBP runtime, the communications between it and your browser will happen directly. This makes it easy to work with Node.js projects running on your own machine even when offline.

## Cross-platform, Full-stack

When we launched the NoFlo UI Kickstarter, we were initially only thinking about how to support NoFlo in different environments. But in the course of development we ended up defining a network protocol for FBP that enabled us to move past just a single FBP environment and towards supporting all of them. This is what prompted the Flowhub rebranding.

Since then, the number of supported FBP environments has been growing. Here is a list of the ones I'm aware of:

I hope that the developers of other FBP environments like JavaFBP and GoFlow add support for the FBP protocol soon so that they can also utilize the Flowhub interface.

## Open Source vs. Paid

As promised in our Kickstarter, the NoFlo Development Environment is an open source project available under the MIT license.

Flowhub is a branded and supported instance of that with some additional network services like the Runtime Registry.

The Flowhub plans allow us to continue development of this Flow-Based Programming toolset, as well as to provide the various network services needed for making the experience smooth.

Just like with GitHub, Flowhub provides a free environment for anybody working on public and open source projects. Private projects need a paid plan.

### Kickstarter & Pre-Ordered Plans

It is likely that many readers of this post already supported our Kickstarter or pre-ordered a Flowhub plan. Since Flowhub is still in beta, we haven't activated your plans yet. So for now, everybody is using Flowhub with the free plan.

We will be rolling out the paid plans and Kickstarter rewards towards the end of the beta testing period.

## Examples

Here are some examples of things you can build with Flowhub targeting web browsers:

For a more comprehensive cross-platform project, see my Building an Ingress Table with Flowhub post.

There is also an ongoing Google Summer of Code project to port various Meemoo apps to Flowhub. This will hopefully result in a lot more demos.

## Next Steps

The main purpose of this public beta is to give our backers and other FBP enthusiasts early access to the Flowhub user interface. Now we will focus on stabilization and bug fixing, aided by the NoFlo UI issue tracker. We're also gathering feedback from beta testers in the form of user surveys, and will use those to prioritize both bug fixing and feature work.

Right now the main areas of focus are:

We hope to release the stable version of Flowhub in summer 2014.

## May 02, 2014

### Henri Bergius

#### Flowhub and the GNOME Developer Experience

I've spent the last three days in the GNOME Developer Experience hackfest working on the NoFlo runtime for GNOME with Lionel Landwerlin.

What the resulting project does is give the ability to build and debug GNOME applications in a visual way with the Flowhub user interface. You can interact with large parts of the GNOME API using either automatically generated components, or hand-built ones. And while your software is running, you can see all the data passing through the connections in the Flowhub UI.

The way this works is the following:

• You install and run the noflo-gnome runtime
• The runtime loads all installed NoFlo components and dynamically registers additional ones based on GObject Introspection
• The runtime pings Flowhub's runtime registry to notify the user that it is available
• Based on the registry, the runtime becomes available in the UI
• After this, the UI can start communicating the with runtime. This includes loading and registering components, and creating and running NoFlo graphs
• The graphs are run inside Gjs

While there is still quite a bit of work to be done in exposing more of the GNOME API as flow-based components, you can already do quite a bit with this. In addition to building simple UIs with GTK+, working with Clutter animations was especially fun. With NoFlo, every running graph is "live", and so you can easily modify the various parameters and add new functionality while the software is running, and see those changes take effect immediately.

Here is a quick video of building a simple GTK application that loads a Glade user interface definition, runs it as a new desktop window, and does some signal handling:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/uyuoP3sjI6g" width="480"></iframe>

If you're interested in visual, flow-based programming on the Linux desktop, feel free to try out the noflo-gnome project!

There are still bugs to squish, documentation to write, and more APIs to wrap as components. All help in those is more than welcome.

## April 09, 2014

### Ubuntu-blogi

#### Put Google's services to use in Ubuntu!

I often hear grumbling that Ubuntu lacks features people are used to in Windows. That is quite true and has to be admitted, but things have lately been moving in a better direction. In this post I give a few tips on how to get even more out of an Ubuntu environment with the help of the Chrome browser.

Practically all of Google's services are tied to their Chrome and Chromium browsers. I noticed this myself about a year ago, when I upgraded my company's "office machine" to Ubuntu. I had just started using the Drive cloud service and was frustrated that Google didn't offer a native desktop client for Ubuntu. On Windows, I had been frustrated by the slowness of the client.

After some googling I ran into the sites Omg! Ubuntu! and Omg! Chrome! These sites are run by the same maintainer, but each publishes new articles daily on its own topic. I dug deeper into the articles and found a way to make Google's applications more readily available on the desktop.

Since I am a Drive user, I wanted easy access to the applications it offers. This turned out to be simple; here are the instructions:

1) Sign in to Chromium or Chrome
2) On the Apps tab, pick the applications you want by right-clicking their icons and choosing "Create shortcut"
3) Save the shortcut to a folder of your choice
4) Navigate back to that folder, select the icons, and drag them onto the Ubuntu launcher bar
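For the curious, the shortcut created in step 2 is a standard freedesktop.org .desktop launcher. A minimal hand-written equivalent might look like the sketch below; the app id shown is a placeholder, not a real extension id:

```
[Desktop Entry]
Type=Application
Name=Google Drive
# The --app-id value below is a placeholder; Chrome fills in the real
# extension id when it generates the shortcut itself
Exec=chromium-browser --app-id=PLACEHOLDER_APP_ID
Icon=chromium-browser
Terminal=false
```

Launchers like this behave like any other application entry, which is why they can be dragged onto the Ubuntu launcher bar.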

Ta-da! You now have Google's office applications, the cloud, and whatever else you care to install from the Chrome Web Store at your disposal!

On top of this, your files are saved straight to the cloud, so you can reach them from anywhere and need not worry about backups.

I believe this will be useful for users who have so far relied on Ubuntu One (which will be discontinued shortly) and for anyone tired of keeping their files in sync. In my own case my work is highly mobile and I need unrestricted access to my files, and this approach has made my daily use considerably easier.

## March 19, 2014

### Losca

#### Qt 5.2.1 in Ubuntu

 Ubuntu running Qt 5.2.1
Qt 5.2.1 landed in Ubuntu 14.04 LTS last Friday, hooray! Making it into a drop-in replacement for Qt 5.0.2 was not trivial. Because of the qreal change, it was decided to rebuild everything against the new Qt, so it was an all-at-once approach involving roughly 130 source packages while the parts were constantly moving. The landing last week meant pushing around three thousand binary packages - counting all six architectures - to the archives, with a total size of close to 10 gigabytes.

The new Qt brings performance and features to base future work on, and is a solid foundation for the future of Ubuntu. You may be interested in the release notes for Qt 5.2.0 and 5.2.1. The Ubuntu SDK got updated to Qt Creator 3.0.1 + the new Ubuntu plugin at the same time, although updates for the older Ubuntu releases are a work in progress by the SDK Team.

#### How We Got Here

Throughout the last few months before the final joint push, I filed tens of tagged bugs. For most of that time I was interested only in build and unit test results, since even tracking those was quite a task. I offered simple fixes here and there myself, whenever I found one.

I created automated Launchpad recipe builds for over 80 packages that rely on Qt 5 in Ubuntu. Meanwhile I also kept on updating the Qt packaging for its 20+ source packages and tried to stay on top of Debian's and upstream's changes.

Parallel to this work, some like the Unity 8 and UI Toolkit developers started experimenting with my Qt 5.2 PPA. It turned out the rewritten QML engine in Qt 5.2 - V4 - was not entirely stable when 5.2.0 was released, so they worked together with upstream on fixes. It was only after 5.2.1 release that it could be said that V4 worked well enough for Unity 8. Known issues like these slowed down the start of full-blown testing.

Then everything built, unit tests passed, most integration tests passed and things seemed mostly to work. We had automated autopilot integration testing runs. The apps team tested through all of the app store to find out whether anything needed fixes - most apps were fine without changes. On top of the autopilot test failures and other app issues found, manual testing uncovered a few more bugs.

 Some critical pieces of software, like Sudoku, needed small fixes
Finally last Thursday it was decided to push Qt in, with the belief that the remaining issues either had fixes in branches or were not blockers. It turned out that the real deployment of Qt revealed a couple more problems, some new issues were raised to blocker status, and not all of the believed fixes actually fixed the bugs. So it was not a complete success. Considering the complexity of the landing, however, it was an adequate accomplishment.

#### Specific Issues

Throughout this exercise I bumped into more obstacles than I can remember, but they included:
• Some of the packages had not seen updates for months, or in some cases since last summer, and since I needed to rebuild everything I ran into various problems that were not related to Qt 5.2 at all
• Unrelated changes during 14.04 development broke packages - like one wouldn't immediately think a gtkdoc update would break a package using Qt
• Syncing packaging with Debian is GOOD, and the fixes from Debian were likewise excellent and needed, but some changes there had effects on our widespread Qt 5 usage, like the mkspecs directory move
• xvfb used to run unit tests needed parameters updated in most packages because of OpenGL changes in Qt
• arm64 and ppc64el were late to be added to the landing PPA. Fixing those archs up was quite a last minute effort and needed to continue after landing by the porters. On the plus side, with Qt 5.2's V4 working on those archs unlike Qt 5.0's V8 based Qt Declarative, a majority of Unity 8 dependencies are now already available for 64-bit ARM and PowerPC!
• While Qt was being prepared, the 100 other packages kept on changing, and I needed to stay on top of all of it, especially during the final landing phase that lasted two weeks. During it, there was no way of "locking" packages into the Qt 5.2 transition, so for the 20+ manual uploads I simply needed to keep track of whether something had changed in the distribution and accommodate.
One issue related to the last point was that some needed tooling was still in progress at the time. There was no support for automated AP test runs against a PPA, and no support for building images either. If migration to the Ubuntu Touch landing process (CI Train, a midpoint on the way to CI Airlines) had been completed for all the packages earlier, handling the locking would have been clearer, and the "trunk passes all integration tests too" guarantee would have prevented the "trunk seemingly got broken" situations I ended up in, since I was using bzr trunks everywhere.
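As an illustration of the xvfb parameter change mentioned above, a typical fix was to give xvfb a GLX-capable, 24-bit screen that Qt's OpenGL detection could work with. This fragment is only a sketch of the general idea, not a copy of any specific package's debian/rules:

```make
# debian/rules fragment (illustrative): run the unit tests under a virtual
# X server whose screen advertises GLX and 24-bit depth, which the
# OpenGL changes in Qt 5.2 expect.
override_dh_auto_test:
	xvfb-run -a -s "-screen 0 1024x768x24 +extension GLX" dh_auto_test
```

Without parameters like these, Qt-using test suites tended to fail at OpenGL context creation rather than in the tests themselves.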

#### Qt 5.3?

We are close to having a promoted Ubuntu image for mobile users running Qt 5.2, if no new issues pop up. Ubuntu 14.04 LTS will be released in a month, to the joy of desktop and mobile users alike.

It was discussed during the vUDS that Qt 5.3.x would likely be the Qt version for the next cycle, to stay on the more conservative side this time. It would not be entirely wrong to say we should have migrated to Qt 5.1 at the beginning of this cycle and only then considered 5.2. With 5.0 in use and its known issues, we almost had to switch to 5.2.

Kubuntu will join the Qt 5 users next cycle, so it's no longer only Ubuntu deciding the version of Qt. Hopefully there can be a joint agreement, but in the worst case Ubuntu will need a separate Qt version packaged.

## March 18, 2014

### Henri Bergius

#### Building an Ingress Table with Flowhub

The c-base space station — a culture carbonite and a hackerspace — is the focal point of Berlin's thriving tech scene. It is also the place where many of the city's Ingress agents converge after an evening of hectic raiding or farming.

In February we came up with an idea for combining our dual passions of open source software and Ingress in a new way. Jon Nordby from the Bitraf hackerspace in Oslo had recently shown off the new full-stack development capabilities of Flowhub, made possible by integrating my NoFlo flow-based programming framework for JavaScript with his MicroFlo, which brings similar abilities to microcontroller programming. So why not use them to build something awesome?

Since Flowhub is nearing a public beta, this would also give us a way to showcase some of the possibilities, as well as stress-test Flow-Based Programming in an Internet-connected hardware project. Hackerspace projects often tend to stretch from months to infinity; our experiences with NoFlo and flying drones had already shown that with FBP we can easily parallelize development, challenging some of the central dogmas of The Mythical Man-Month. It was worth a try to see if this would allow us to compress the time needed for such a project from a couple of months to a long weekend.

## Introducing the Ingress Table

Before the actual hackathon we had two meetings with the project team. There were many decisions to be made, starting from the size and shape of the table to the features it should have. Looking at the different tables in the c-base main hall we settled on a square table of slightly less than 1m2, as that would fit nicely in the area, and still seat the magical number of eight Ingress agents or other c-base regulars.

The tabletop would be a map of c-base and the surrounding area, and it would show the status of the portals nearby, as well as alert people sitting at it of attacks and other Ingress events of interest. Essentially, it'd be a physical world equivalent of the Intel Map.

We considered integrating a regular screen to have maximum flexibility in the face of the changing world of Ingress, but eventually decided that most people at c-base already spend much of their waking hours looking at a screen, and so we'd do something more ambient and just use a set of physical lights.

The hardware and software also needed some thought, especially since some of the parts needed might have long shipping times. Eventually we settled on the combination of a BeagleBone Black ARM computer as the brains of the system, and a LaunchPad Tiva as the microcontroller running the hardware. The computer would run NoFlo on Linux, and we'd flash the microcontroller with MicroFlo.

By the time of arriving to c-base, many Ingress agents have their phones and battery packs depleted, and so we incorporated eight USB power ports into the table design. Simply plug in your own cable and you can charge your device while enjoying the beer and the chat.

Once the plans had been set, a flurry of preparations began. We would need lots of things, ranging from wood and glass parts for the table shell, to various different electronics and computer parts for the insides. And some of these would have to be ordered from China. Would they arrive in time?

I spent the two weeks before the hackathon doing a project in Florence, and it was quite interesting to coordinate the logistics remotely. Thankfully our Berlin team did a stellar job of tracking missing shipments and collecting the things we needed!

## The hackathon

I landed in Berlin in the early evening of Friday, March 14th. After negotiating the rush-hour public transport from Tegel airport, I arrived at the space station to see most of our team already there, unpacking and getting the supplies ready for the hackathon.

At this point we essentially had only the raw materials available. Planks of wood, plates of glass and plastic. And a lot of electronics components. No assembly had yet been done, and no lines of code had been written or graphs drawn for the project.

We quickly organized the hackathon into three tracks: hardware, software, and electronics. The hardware team got themselves busy building the table shell, as that would need to be finished early so that the paint would have time to dry before we'd start assembling the electronics into it. Over the next day they'd often call the other teams over to help in holding or moving things, and also for the very important task of test-sitting the table to figure out the optimal trade-off between table height and legroom.

While the hardware guys were working, we started designing the software part of it. Some basic decisions had to be taken on how we'd get the data, and how we would filter and transform the raw portal statuses to commands to the actual lights in the table.

Eventually we settled on a NoFlo graph that would poll the portal data in, and run it through a set of transformations to detect the data points of interest, like portals that have changed owners or are under attack. In parallel we would run some animation loops to create a more organic, shifting feel to the whole map by having the light shining through the streets be constantly shifting and moving.

(and yes, the graph you see above is the actual running code of the table)

Since the electronics wouldn't be working for a while still, we decided to also build an Ingress Table Emulator in HTML and NoFlo. This would give us something to test the data and our graphs with while the other teams were still working on their parts. This proved very useful, as it allowed us to watch a big Ingress battle through our simulated blinking lights as early as Saturday evening, and to see our emulated table go through pretty much all the different states we were interested in.

Once the table shell had been built and the paint was drying, the hardware team started preparing the other things like the map layer, the glass top, and the USB chargers.

For the electronics we noticed that some parts were still missing from the inventory, so I had to do a quick supply run on Saturday. But once we had those, the team got down to calculations and soldering.

Every project has its setbacks, and in this case it came in the form of running pre-released software. It turned out that the LaunchPad port of MicroFlo still had some issues, and so most of Sunday was spent debugging the communications protocol and tuning the components. But the end result is a much-improved MicroFlo, and eventually we got the major moment of triumph of seeing the street lights animate for the first time: LED strips controlled by a LaunchPad Tiva, in turn controlled by animation loops running in a NoFlo graph on Node.js.

On Monday evening we convened at c-base for the final push. The street lights were ready, but there were still some issues with getting the table connected wirelessly to the space station network. And we still needed to implement the MicroFlo component for the portal lights. The latter resulted in an epic parallel programming and debugging session between Jon in Norway and Uwe in Berlin. But by the end of the evening we were able to test the full system for the first time, and carry the table to its new home.

It was time to celebrate. For an Ingress table, this meant sitting around the table enjoying cold beers, while hacking a level 8 blue portal and watching the lights change across the board as agents ventured out.

(We're still in the process of collecting media about the project. The table will look a lot more awesome in video, and I hope I'll be able to add some of those to this post soon)

Having the first running version of the table is of course a big milestone. Now we should monitor it for some time (over beer, of course) and make adjustments as necessary. Some things obviously need to be changed, like the brightness of the lights given the table's location in the main hall. And of course we'll only know about the full system's robustness once it has a bit more mileage.

Since we already have a HTML emulator of the table, it might be fun to release that to the public at some point. That way agents who are not at the c-base main hall could also see what is going on with this simple interface.

An interesting area of development is also to see how the table could integrate better with the rest of the space station. There are various screens everywhere, ranging from the awesome Mate Light to smaller screens and gauges. And all of that is pretty much networked and available. Maybe we could visualize some events of interest in other parts of the station. This shows that the "Internet of Things" is never finished.

So far Niantic Labs — the makers of Ingress — have limited the availability of a portal data API to a few selected parties, and so for now we had to work with a third party to get the information we needed. We hope this table will be another step in convincing Niantic of the creative potential that an official, open Ingress API would unleash.

I'd like to give big thanks especially to everybody who participated in the hackathon — whether on location or remotely from Oslo — as well as to those who were cheering us on. I'm also grateful to Flowhub for sponsoring the project. And of course to c-base for being an awesome place where such things can happen.

The full source code for the Ingress Table can be found at https://github.com/c-base/ingress-table

## December 18, 2013

### Ubuntu-blogi

#### Three reasons to join COSS, the Finnish Centre for Open Systems and Solutions

Who looks after the interests of users of Ubuntu and other open source software? Which organisation is the main promoter and advocate of open source in Finland? The answer is COSS ry.

## Three reasons to support COSS

1. COSS raises awareness of open source, especially in public administration
2. COSS promotes the growth and employment of the Finnish IT sector by accelerating the success of Finnish-born technology
3. COSS builds open source expertise through training, events and networking opportunities

Become a supporting member of COSS ry! →

## What is COSS?

COSS ry, the Finnish Centre for Open Systems and Solutions, is a non-profit association that works to promote open source, open data, open interfaces and open standards.

COSS has been active since 2003 and is internationally known as one of the oldest and most active openness centres in Europe.

COSS fosters cooperation between communities, companies and public administration, and among other things organises events. COSS's site hosts a nationwide calendar of all events in the field: http://coss.fi/kalenteri/

The association works by informing and educating about the principles, practices and technologies of openness. COSS.fi is the largest site in the field in Finland.

### Examples of COSS's activities

• Supports public administration in all efforts to increase the openness of information systems
• Promotes open source solutions, services and business
• Organises and supports events
• Communicates online and in other media
• Maintains an active business network: about 100 Finnish open source companies are COSS members
• Promotes cooperation between companies, research institutes and universities
• Promotes collaboration between companies and developer communities
• Its localisation working group translates software into Finnish
• Organises Linux event days
• Supports the Devaamo Summit event
• Maintains cooperation between Finnish and international organisations and communities in the field
• Organised the KDE developers' Akademy 2010 event in Tampere
• Cooperates with the Linux Foundation, Free Software Foundation Europe and many others
• Promotes open source, open standards, open interfaces and open data
• Awards the annual Open World Hero prize

Become a supporting member of COSS ry! →

Please help COSS gain more supporters by sharing this message on social media!

## November 27, 2013

### Losca

#### Jolla launch party

And then for something completely different: I've got my hands on a Jolla now, and it's beautiful!

A quick dmesg is of course among the first things to do...
[    0.000000] Booting Linux on physical CPU 0
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.4.0.20131115.2 (abuild@es-17-21) (gcc version 4.6.4 20130412 (Mer 4.6.4-1) (Linaro GCC 4.6-2013.05) ) #1 SMP PREEMPT Mon Nov 18 03:00:49 UTC 2013
[    0.000000] CPU: ARMv7 Processor [511f04d4] revision 4 (ARMv7), cr=10c5387d
[    0.000000] CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
[    0.000000] Machine: QCT MSM8930 CDP
... click for the complete file ...
And what it has eaten: Qt 5.1!
...
qt5-qtconcurrent-5.1.0+git27-1.9.4.armv7hl
qt5-qtcore-5.1.0+git27-1.9.4.armv7hl
qt5-qtdbus-5.1.0+git27-1.9.4.armv7hl
qt5-qtdeclarative-5.1.0+git24-1.10.2.armv7hl
... click for the complete file ...
It was a very nice launch party, thanks to everyone involved.

Update: a few more at my Google+ Jolla launch party gallery

#### Background

I recently upgraded from Linux 3.8 to 3.11, along with newer Mesa, X.Org and Intel drivers, and found that a small workaround was needed because of upstream changes.

The upstream change was Add "Automatic" mode for "Broadcast RGB" property, with Automatic as the new default. This is a sensible default, since many (most?) TVs default to the more limited 16-235 range, and continuing to default to Full on the driver side would mean wrong colors on the TV. I've set my screen to support the full 0-255 range available, so as not to cut down the number of available shades of colors.

Unfortunately it seems the Automatic setting does not work for my HDMI input, i.e. blacks become grey since the driver still outputs the more limited range. Maybe there is something to improve on the driver side, but I'd guess it's more about my 2008 Sony TV actually having a mode for which the standard suggests limited range. I remember the TV did default to limited range, so maybe the EDID data from the TV does not change when the RGB range is set to Full.

I hope the Automatic setting works to offer full range on newer screens and the modes they have, but that's probably up to the manufacturers and standards.

Below is an illustration of the correct setting on my Haswell CPU. When the Broadcast RGB is left to its default Automatic setting, the above image is displayed. When set to Full, the image below with deeper blacks is seen instead. I used manual settings on my camera so it's the same exposure.

#### Workaround

For me the workaround has evolved to the following so far. Create a /etc/X11/Xsession.d/95fullrgb file:

if [ "$(/usr/bin/xrandr -q --prop | grep 'Broadcast RGB: Full' | wc -l)" = "0" ] ; then
    /usr/bin/xrandr --output HDMI3 --set "Broadcast RGB" "Full"
fi

And since I'm using lightdm, adding the following to /etc/lightdm/lightdm.conf means the flicker only happens once during bootup:

display-setup-script=/etc/X11/Xsession.d/95fullrgb

Important: when using the LightDM setting, set the executable bit (chmod +x) on /etc/X11/Xsession.d/95fullrgb for it to work. Obviously also check your output; for me it was HDMI3. If there is no situation where the setting reverts to "Limited 16:235" on its own, the display manager script should be enough, and having the script in /etc/X11/Xsession.d as well is redundant and slows login down. I think for me login went from maybe 2 seconds to 3 seconds, since executing an xrandr query is not cheap.

#### Misc

Note that, unrelated to Full range usage, the Limited range currently behaves incorrectly on Haswell until the patch in bug #71769 is accepted. That means the blacks are grey in Limited mode even if the screen is also set to Limited. I'd prefer a kernel parameter for the Broadcast RGB setting, although my Haswell machine boots so fast that I don't get to see too many seconds of wrong colors...

## August 15, 2013

### Aapo Rantalainen

#### The stick game and some mathematics. Who wins the 7531 stick game? Why?

Rules, for two players. Starting position: the sticks lie in rows, with 7 sticks on the first row, 5 on the next, then 3, and 1 on the last. On their turn, a player picks one row and removes as many sticks from it as they like, but at least one. All of them if they wish (and of course no more than the row holds). The loser is the player who has to take the last stick from the board.

May I have the first move, or do you want to start the game? Prove your answer.

Stop here and think.

The answer begins: Let us adopt a notation in which every game position is described with four digits. Since the order of the rows does not matter, we agree that the digits are always written from largest to smallest. So the position 2100 = 2010 = 2001 = 1200 = 1020 = 0120 = 0210 = 0201 = 0021. We denote all of these positions by 2100.

Now the losing condition of the game is:

Definition I: A player loses if they are left with the position 1000.

(Side note: if we left the zeros out entirely, the number of stick rows at the start could also be something other than four.)

What follows is a mathematical proof. A "lemma" is an auxiliary proposition. Programmers can think of it as a function call (not to be confused with functions in mathematics). I always define a lemma before using it, so that no circular reasoning can arise.

I claim that the first player always loses (this only becomes clear at the very end of the proof). Claim: for every (∀) move by the first player there exists (∃) a reply by the opponent with which the first player loses.

Lemma 1110: the player to move loses if they are left with the position 1110.
Proof: Whatever move the player makes, the next position is 1100, from which the opponent returns 1000. Loses by the definition.

Lemma 2200: loses.
Proof: (The player can take either one or two sticks from either row.) The move can lead to the position
a) 2100, from which the opponent returns 1000. Loses by the definition.
b) 2000, from which the opponent returns 1000. Loses by the definition.

Lemma 2211: loses.
Proof: The move can lead to the position
a) 2210, from which the opponent returns 2200. Loses by Lemma 2200.
b) 2111, from which the opponent returns 1110. Loses by Lemma 1110.
c) 2110, from which the opponent returns 1110. Loses by Lemma 1110.

Lemma 3210: loses.
Proof: The move can lead to the position
a) 3200, from which the opponent returns 2200. Loses by Lemma 2200.
b) 3110, from which the opponent returns 1110. Loses by Lemma 1110.
c) 3100, from which the opponent returns 1000. Loses by the definition.
d) 2210, from which the opponent returns 2200. Loses by Lemma 2200.
e) 2110, from which the opponent returns 1110. Loses by Lemma 1110.
f) 2100, from which the opponent returns 1000. Loses by the definition.

Lemma 3300: loses.
Proof: The move can lead to the position
a) 3200, from which the opponent returns 2200. Loses by Lemma 2200.
b) 3100, from which the opponent returns 1000. Loses by the definition.
c) 3000, from which the opponent returns 1000. Loses by the definition.

Lemma 3311: loses.
Proof: The move can lead to the position
a) 3310, from which the opponent returns 3300. Loses by Lemma 3300.
b) 3211, from which the opponent returns 2211. Loses by Lemma 2211.
c) 3111, from which the opponent returns 1110. Loses by Lemma 1110.
d) 3110, from which the opponent returns 1110. Loses by Lemma 1110.

Lemma 4400: loses.
Proof: The move can lead to the position
a) 4300, from which the opponent returns 3300. Loses by Lemma 3300.
b) 4200, from which the opponent returns 2200. Loses by Lemma 2200.
c) 4100, from which the opponent returns 1000. Loses by the definition.
d) 4000, from which the opponent returns 1000. Loses by the definition.

Lemma 4411: loses.
Proof: The move can lead to the position
a) 4410, from which the opponent returns 4400. Loses by Lemma 4400.
b) 4311, from which the opponent returns 3311. Loses by Lemma 3311.
c) 4211, from which the opponent returns 2211. Loses by Lemma 2211.
d) 4111, from which the opponent returns 1110. Loses by Lemma 1110.
e) 4110, from which the opponent returns 1110. Loses by Lemma 1110.

Lemma 5500: loses.
Proof: The move can lead to the position
a) 5400, from which the opponent returns 4400. Loses by Lemma 4400.
b) 5300, from which the opponent returns 3300. Loses by Lemma 3300.
c) 5200, from which the opponent returns 2200. Loses by Lemma 2200.
d) 5100, from which the opponent returns 1000. Loses by the definition.
e) 5000, from which the opponent returns 1000. Loses by the definition.

Lemma 5410: loses.
Proof: The move can lead to the position
a) 5400, from which the opponent returns 4400. Loses by Lemma 4400.
b) 5310, from which the opponent returns 3210. Loses by Lemma 3210.
c) 5210, from which the opponent returns 3210. Loses by Lemma 3210.
d) 5110, from which the opponent returns 1110. Loses by Lemma 1110.
e) 5100, from which the opponent returns 1000. Loses by the definition.
f) 4410, from which the opponent returns 4400. Loses by Lemma 4400.
g) 4310, from which the opponent returns 3210. Loses by Lemma 3210.
h) 4210, from which the opponent returns 3210. Loses by Lemma 3210.
i) 4110, from which the opponent returns 1110. Loses by Lemma 1110.
j) 4100, from which the opponent returns 1000. Loses by the definition.

Lemma 5511: loses.
Proof: The move can lead to the position
a) 5510, from which the opponent returns 5500. Loses by Lemma 5500.
b) 5411, from which the opponent returns 4411. Loses by Lemma 4411.
c) 5311, from which the opponent returns 3311. Loses by Lemma 3311.
d) 5211, from which the opponent returns 2211. Loses by Lemma 2211.
e) 5111, from which the opponent returns 1110. Loses by Lemma 1110.
f) 5110, from which the opponent returns 1110. Loses by Lemma 1110.

Lemma 6420: loses.
Proof: The move can lead to the position
a) 6410, from which the opponent returns 5410. Loses by Lemma 5410.
b) 6400, from which the opponent returns 4400. Loses by Lemma 4400.
c) 6320, from which the opponent returns 3210. Loses by Lemma 3210.
d) 6220, from which the opponent returns 2200. Loses by Lemma 2200.
e) 6210, from which the opponent returns 3210. Loses by Lemma 3210.
f) 6200, from which the opponent returns 2200. Loses by Lemma 2200.
g) 5420, from which the opponent returns 5410. Loses by Lemma 5410.
h) 4420, from which the opponent returns 4400. Loses by Lemma 4400.
i) 4320, from which the opponent returns 3210. Loses by Lemma 3210.
j) 4220, from which the opponent returns 2200. Loses by Lemma 2200.
k) 4210, from which the opponent returns 3210. Loses by Lemma 3210.
l) 4200, from which the opponent returns 2200. Loses by Lemma 2200.

Lemma 6431: loses.
Proof: The move can lead to the position
a) 6430, from which the opponent returns 6420. Loses by Lemma 6420.
b) 6421, from which the opponent returns 6420. Loses by Lemma 6420.
c) 6411, from which the opponent returns 4411. Loses by Lemma 4411.
d) 6410, from which the opponent returns 5410. Loses by Lemma 5410.
e) 6331, from which the opponent returns 3311. Loses by Lemma 3311.
f) 6321, from which the opponent returns 3210. Loses by Lemma 3210.
g) 6311, from which the opponent returns 3311. Loses by Lemma 3311.
h) 6310, from which the opponent returns 3210. Loses by Lemma 3210.
i) 5431, from which the opponent returns 5410. Loses by Lemma 5410.
j) 4431, from which the opponent returns 4411. Loses by Lemma 4411.
k) 4331, from which the opponent returns 3311. Loses by Lemma 3311.
l) 4321, from which the opponent returns 3210. Loses by Lemma 3210.
m) 4311, from which the opponent returns 3311. Loses by Lemma 3311.
n) 4310, from which the opponent returns 3210. Loses by Lemma 3210.

Lemma 6521: loses.
Proof: The move can lead to the position
a) 6520, from which the opponent returns 6420. Loses by Lemma 6420.
b) 6511, from which the opponent returns 5511. Loses by Lemma 5511.
c) 6510, from which the opponent returns 5410. Loses by Lemma 5410.
d) 6421, from which the opponent returns 6420. Loses by Lemma 6420.
e) 6321, from which the opponent returns 3210. Loses by Lemma 3210.
f) 6221, from which the opponent returns 2211. Loses by Lemma 2211.
g) 6211, from which the opponent returns 2211. Loses by Lemma 2211.
h) 6210, from which the opponent returns 3210. Loses by Lemma 3210.
i) 5521, from which the opponent returns 5511. Loses by Lemma 5511.
j) 5421, from which the opponent returns 5410. Loses by Lemma 5410.
k) 5321, from which the opponent returns 3210. Loses by Lemma 3210.
l) 5221, from which the opponent returns 2211. Loses by Lemma 2211.
m) 5211, from which the opponent returns 2211. Loses by Lemma 2211.
n) 5210, from which the opponent returns 3210. Loses by Lemma 3210.

Lemma 6530: loses.
Proof: The move can lead to the position
a) 6520, from which the opponent returns 6420. Loses by Lemma 6420.
b) 6510, from which the opponent returns 5410. Loses by Lemma 5410.
c) 6500, from which the opponent returns 5500. Loses by Lemma 5500.
d) 6430, from which the opponent returns 6420. Loses by Lemma 6420.
e) 6330, from which the opponent returns 3300. Loses by Lemma 3300.
f) 6320, from which the opponent returns 3210. Loses by Lemma 3210.
g) 6310, from which the opponent returns 3210. Loses by Lemma 3210.
h) 6300, from which the opponent returns 3300. Loses by Lemma 3300.
i) 5530, from which the opponent returns 5500. Loses by Lemma 5500.
j) 5430, from which the opponent returns 5410. Loses by Lemma 5410.
k) 5330, from which the opponent returns 3300. Loses by Lemma 3300.
l) 5320, from which the opponent returns 3210. Loses by Lemma 3210.
m) 5310, from which the opponent returns 3210. Loses by Lemma 3210.
n) 5300, from which the opponent returns 3300. Loses by Lemma 3300.

CLAIM: From the position 7531, the player to move loses.
Proof: The move can lead to the position
a) 7530, from which the opponent returns 6530. Loses by Lemma 6530.
b) 7521, from which the opponent returns 6521. Loses by Lemma 6521.
c) 7511, from which the opponent returns 5511. Loses by Lemma 5511.
d) 7510, from which the opponent returns 5410. Loses by Lemma 5410.
e) 7431, from which the opponent returns 6431. Loses by Lemma 6431.
f) 7331, from which the opponent returns 3311. Loses by Lemma 3311.
g) 7321, from which the opponent returns 3210. Loses by Lemma 3210.
h) 7311, from which the opponent returns 3311. Loses by Lemma 3311.
i) 7310, from which the opponent returns 3210. Loses by Lemma 3210.
j) 6531, from which the opponent returns 6431. Loses by Lemma 6431.
k) 5531, from which the opponent returns 5511. Loses by Lemma 5511.
l) 5431, from which the opponent returns 5410. Loses by Lemma 5410.
m) 5331, from which the opponent returns 3311. Loses by Lemma 3311.
n) 5321, from which the opponent returns 3210. Loses by Lemma 3210.
o) 5311, from which the opponent returns 3311. Loses by Lemma 3311.
p) 5310, from which the opponent returns 3210. Loses by Lemma 3210.
## July 10, 2013

### Losca

#### Latest Compiz gaming update to the Ubuntu 12.04 LTS

A new Compiz window manager performance update reached Ubuntu 12.04 LTS users last week. This completes the earlier [1] [2] enabling of 'unredirected' (compositing disabled) fullscreen gaming and other applications for performance benefits.

The update has two fixes. The first fixes a Compiz CPU usage regression. The second enables unredirection also for Intel and Nouveau users on the Mesa 9.0.x stack, that is, up-to-date installs from the 12.04.2 LTS installation media and anyone with an original 12.04 LTS installation who has opted in to the 'quantal' package updates of the kernel, X.Org and Mesa. *)

The new default setting for the unredirection blacklist is shown in the image below (CompizConfig Settings Manager -> General -> OpenGL). It now only blacklists the original Mesa 8.0.x series for nouveau and intel, plus the plain '9.0' (not a point release).

I did new runs of OpenArena at openbenchmarking.org from a 12.04.2 LTS live USB. For comparison I first ran with the non-updated Mesa 9.0 from February. I then allowed Ubuntu to upgrade Mesa to the current 9.0.3, and ran the test with both the previous version of Compiz and the newly released one.

| 12.04.2 LTS   | Mesa 9.0, old Compiz | Mesa 9.0.3, old Compiz | Mesa 9.0.3, new Compiz |
|---------------|----------------------|------------------------|------------------------|
| OpenArena fps | 29.63                | 31.90                  | 35.03                  |

Reading into the results, Mesa 9.0.3 seems to have improved the slowdown in the redirected case, which would include normal desktop usage as well. Meanwhile the unredirected performance remains about 10% higher.

*) Packages linux-generic-lts-quantal xserver-xorg-lts-quantal libgl1-mesa-dri-lts-quantal libegl1-mesa-drivers-lts-quantal. The 'raring' stack with Mesa 9.1 and kernel 3.8 will be available around the time of the 12.04.3 LTS installation media in late August.
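As a quick sanity check on the "about 10% higher" claim (an added sketch, using only the fps values from the benchmark runs above):

```shell
#!/bin/sh
# fps with Mesa 9.0.3: old Compiz (redirected baseline) vs new Compiz (unredirected)
redirected=31.90
unredirected=35.03

# percentage improvement of unredirected over redirected rendering
awk -v a="$redirected" -v b="$unredirected" \
    'BEGIN { printf "unredirected is %.1f%% faster\n", (b - a) / a * 100 }'
# prints: unredirected is 9.8% faster
```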
## May 21, 2013

### Losca

#### Network from laptop to Android device over USB

If you're running an Android device with a GNU userland Linux in a chroot and need full network access over a USB cable (so that you can use your laptop/desktop machine's network connection from the device), here's a quick primer on how it can be set up.

When doing Openmoko hacking, one always first plugged in the USB cable and forwarded the network, or, as I did later, forwarded the network over Bluetooth. That was mostly because the WiFi was quite unstable with many of the kernels. I recently found myself using a chroot on a Nexus 4 without working WiFi, so instead of my usual WiFi usage I needed network over USB... trivial, of course, except that Android is in the way and I'm an Android newbie. Thanks to ZDmitry on Freenode, I got the bits for the Android part and got it working.

On the device, have e.g. data/usb.sh with the following contents:

#!/system/xbin/sh
CHROOT="/data/chroot"
ip addr add 192.168.137.2/30 dev usb0
ip link set usb0 up
ip route delete default
ip route add default via 192.168.137.1
setprop net.dns1 8.8.8.8
echo 'nameserver 8.8.8.8' >> $CHROOT/run/resolvconf/resolv.conf
On the host, execute the following:
adb shell setprop sys.usb.config rndis,adb
adb shell data/usb.sh
sudo ifconfig usb0 192.168.137.1
sudo iptables -A POSTROUTING -t nat -j MASQUERADE -s 192.168.137.0/24
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo iptables -P FORWARD ACCEPT
This works at least with an Ubuntu saucy chroot. The main difference on some other distro might be whether resolv.conf has moved to /run or not. You should now be all set to browse and apt-get stuff from the device again.
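As an aside (my addition, not from the original post), the two addresses above really do sit in the same /30 subnet; a small POSIX shell sketch that computes the network address each endpoint belongs to:

```shell
#!/bin/sh
# Print the network address of IP with the given prefix length.
network_of() {
    ip=$1; prefix=$2
    IFS=. read -r a b c d <<EOF
$ip
EOF
    n=$(( (a << 24) | (b << 16) | (c << 8) | d ))
    mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))
    net=$(( n & mask ))
    echo "$(( (net >> 24) & 255 )).$(( (net >> 16) & 255 )).$(( (net >> 8) & 255 )).$(( net & 255 ))"
}

network_of 192.168.137.1 30   # host side of the link
network_of 192.168.137.2 30   # device side (usb0)
# both print 192.168.137.0, so .1 and .2 are the two usable hosts of the /30
```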

Update: Clarified that this is to forward the desktop/laptop's network connection to the device so that network is accessible from the device over USB.
Update2, 09/2013: It's also possible to get this working on the newer flipped images. Remove the "$CHROOT" from the nameserver echo line and it should be fine. In my brief testing the setting somehow got reset after a while, at which point another run of data/usb.sh on the device restored the connection.

## March 30, 2013

### Jouni Roivas

#### QGraphicsWidget

Usually it's easy to get things working with Qt (http://qt-project.org), but recently I encountered an issue when trying to implement a simple component derived from QGraphicsWidget. My initial idea was to use QGraphicsItem, so I made this little class:
class TestItem : public QGraphicsItem
{
public:
    TestItem(QGraphicsItem *parent = 0) : QGraphicsItem(parent) {}
    void paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget = 0);
    virtual QRectF boundingRect() const;

protected:
    virtual void mousePressEvent(QGraphicsSceneMouseEvent *event);
    virtual void mouseReleaseEvent(QGraphicsSceneMouseEvent *event);
};

void TestItem::mousePressEvent(QGraphicsSceneMouseEvent *event)
{
    qDebug() << __PRETTY_FUNCTION__ << "press";
}

void TestItem::mouseReleaseEvent(QGraphicsSceneMouseEvent *event)
{
    qDebug() << __PRETTY_FUNCTION__ << "release";
}

void TestItem::paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget)
{
    Q_UNUSED(option)
    Q_UNUSED(widget)
    painter->fillRect(boundingRect(), QColor(255, 0, 0, 100));
}

QRectF TestItem::boundingRect() const
{
    return QRectF(-100, -40, 100, 40);
}
Everything was working as expected, but in order to use a QGraphicsLayout, I wanted to derive the class from QGraphicsWidget instead. The naive way was to make minimal changes:
class TestWid : public QGraphicsWidget
{
    Q_OBJECT
public:
    TestWid(QGraphicsItem *parent = 0) : QGraphicsWidget(parent) {}
    void paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget = 0);
    virtual QRectF boundingRect() const;

protected:
    virtual void mousePressEvent(QGraphicsSceneMouseEvent *event);
    virtual void mouseReleaseEvent(QGraphicsSceneMouseEvent *event);
};

void TestWid::mousePressEvent(QGraphicsSceneMouseEvent *event)
{
    qDebug() << __PRETTY_FUNCTION__ << "press";
}

void TestWid::mouseReleaseEvent(QGraphicsSceneMouseEvent *event)
{
    qDebug() << __PRETTY_FUNCTION__ << "release";
}

void TestWid::paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget)
{
    Q_UNUSED(option)
    Q_UNUSED(widget)
    painter->fillRect(boundingRect(), QColor(0, 0, 255, 100));
}

QRectF TestWid::boundingRect() const
{
    return QRectF(-100, -40, 100, 40);
}

Pretty straightforward, isn't it? It showed and painted things as expected, but I didn't get any mouse events. Wait, what?

I spent hours just trying things out and googling the problem. I knew I had hit this very same issue earlier but didn't remember how I solved it, until I figured out a crucial thing: in the case of QGraphicsWidget you must NOT implement boundingRect(). Instead, set the geometry of the object with setGeometry().

So the needed changes were to remove the boundingRect() method and to call setGeometry() in the TestWid constructor:

setGeometry(QRectF(-100, -40, 100, 40));

After these tiny changes I finally got everything working. The whole thing left me really frustrated, and solving the issue didn't feel good; I just felt stupid. Sometimes programming is a great waste of time.

## August 31, 2012

### Jouni Roivas

#### Adventures in Ubuntu land with Ivy Bridge

Recently I got an Intel Ivy Bridge based laptop. Generally I'm quite satisfied with it. Of course I installed the latest Ubuntu on it. The first problem was EFI boot; the BIOS had no other options. The best way to work around it was to use an EFI-aware GRUB 2. I wanted to keep the preinstalled Windows 7 around for a couple of things, so I needed dual boot.

In the end all I needed to do was install GRUB 2 to the EFI boot partition (/dev/sda1 in my case) and create the grub.efi binary under it, then copy /boot/grub/grub.cfg under it as well. In the BIOS, I set up a new boot label to boot \EFI\grub\grub.efi.

After using the system for a couple of days I ran into random crashes; the system hanged completely. I finally traced the problem to the HD4000 graphics driver: http://partiallysanedeveloper.blogspot.fi/2012/05/ivy-bridge-hd4000-linux-freeze.html

I needed to update the kernel. But to which one? After multiple tries, I took the "latest" and "shiniest" one: http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.4-precise/. With that kernel I got almost all the functionality and stability I needed.

However, one BIG problem remained: headphones. I got sound normally from the speakers, but after plugging in the headphones I got nothing. This problem seemed to be present on almost all the kernels I tried. Then I somehow figured out an important thing: when I boot with the headphones plugged in, I get no sound from them; when I boot WITHOUT the headphones plugged in, they work just fine. Of course I had debugged the problem the whole time with the headphones plugged in, and never noticed this could be some weird detection problem. Since I had kind of found a solution, I didn't bother to google it further. And of course Canonical does not provide support for unsupported kernels. If I remember correctly this worked with the original Ubuntu 12.04 kernel, but on my scale the HD4000 problem is a bigger one than having to remember to boot without anything plugged into the 3.5 mm jack...

Of course my hopes are now on 12.10, and I don't want to dig deeper; I just wanted to let you know about this one.

## July 04, 2012

### Ville-Pekka Vainio

#### SSD TRIM/discard on Fedora 17 with encrypted partitions

I have not blogged for a while, now that I am on summer holiday and got a new laptop I finally have something to blog about. I got a Thinkpad T430 and installed a Samsung SSD 830 myself. The 830 is not actually the best choice for a Linux user because you can only download firmware updates with a Windows tool. The tool does let you make a bootable FreeDOS USB disk with which you can apply the update, so you can use a Windows system to download the update and apply it just fine on a Linux system. The reason I got this SSD is that it is 7 mm in height and fits into the T430 without removing any spacers.

I installed Fedora 17 on the laptop and selected drive encryption in the Anaconda installer. I used ext4 and did not use LVM; I do not think it would be of much use on a laptop. After the installation I discovered that Fedora 17 does not enable SSD TRIM/discard automatically. That is probably a good default, as apparently not all SSDs support it. When you have ext4 partitions encrypted with LUKS, as Anaconda sets them up, you need to change two files and regenerate your initramfs to enable TRIM.

First, edit your /etc/fstab and add discard to each ext4 mount. Here is an example of my root mount:
/dev/mapper/luks-secret-id-here / ext4 defaults,discard 1 1

Second, edit your /etc/crypttab and add allow-discards to each line to allow the dmcrypt layer to pass TRIM requests to the disk. Here is an example:
luks-secret-id-here UUID=uuid-here none allow-discards

You need at least dracut-018-78.git20120622.fc17 for this to work, which you should already have on an up-to-date Fedora 17.

Third, regenerate your initramfs by running dracut -f. You may want to take a backup of the old initramfs file in /boot, but then again, real hackers do not make backups.

Fourth, reboot and check with cryptsetup status luks-secret-id-here and mount that your file systems actually use discard now.
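For reference, the two file edits can be sketched with sed; below they run against sample copies of the files, with the placeholder ids kept exactly as in the post (this is an added illustration, not the author's commands):

```shell
#!/bin/sh
# Sample copies of the two files (placeholder ids, as in the post)
cat > fstab.sample <<'EOF'
/dev/mapper/luks-secret-id-here / ext4 defaults 1 1
EOF
cat > crypttab.sample <<'EOF'
luks-secret-id-here UUID=uuid-here none
EOF

# Step 1: append ",discard" to the mount options of the ext4 entry
sed -i 's/\(ext4 [^ ]*\)/\1,discard/' fstab.sample

# Step 2: append " allow-discards" to the crypttab line
sed -i 's/$/ allow-discards/' crypttab.sample

cat fstab.sample crypttab.sample
```

On a real system you would edit /etc/fstab and /etc/crypttab themselves and then regenerate the initramfs with dracut -f as described above.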

Please note that apparently enabling TRIM on encrypted file systems may reveal unencrypted data.

## April 29, 2012

### Miia Ranta

#### Viglen MPC-L from Xubuntu 10.04 LTS to Debian stable

With Ubuntu not supplying a kernel suitable for the CPU of my Viglen MPC-L (a Geode GX2 by National Semiconductor, a 486-class chip buzzing at a 399 MHz clock rate; the machine Duncan documented the installation of Xubuntu on in 2010), it was time to look for alternatives. I wasn't too keen on the idea of using some random repository to get a suitable kernel for a newer version of Ubuntu, so Debian was the next best thing that came to mind.

Friday night, right before heading out to pub with friends, I sat on the couch, armed with a laptop, USB keyboard, RGB cable and a USB memory stick. Trial and error reminded me to

1. use bittorrent to download the image since our flaky Belkin-powered Wifi cuts off the connection every few minutes and thus corrupts direct downloads, and
2. do the boot script magic of pnpbios=off noapic acpi=off like with our earlier Xubuntu installation.

In contrast to the experience of installing Xubuntu on the Viglen MPC-L, the Debian installation was easy from here on. The installer seemed to not only detect the needed kernel and install the correct one (Linux wizzle 2.6.32-5-486 #1 Mon Mar 26 04:36:28 UTC 2012 i586 GNU/Linux) but, judging from the success of the first reboot after the installation had finished and a quick look at /boot/grub/grub.cfg, had also set the right boot options automatically. So the basic setup was a *lot* easier than it was with Xubuntu!
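For comparison, persisting such kernel options by hand is normally done through /etc/default/grub; here is a small added sketch run against a sample copy of the file (on a real Debian system you would edit the file itself and then run update-grub):

```shell
#!/bin/sh
# Sample of the relevant line from /etc/default/grub
cat > grub.sample <<'EOF'
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
EOF

# Append the boot options the Viglen MPC-L needs
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\([^"]*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 pnpbios=off noapic acpi=off"/' grub.sample

cat grub.sample
# prints: GRUB_CMDLINE_LINUX_DEFAULT="quiet pnpbios=off noapic acpi=off"
```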

Some things that I've gotten used to being installed automatically with Ubuntu weren't pre-installed with Debian, so I had to install them myself. Tasksel installed an SSH server, but rsync, lshw and ntfs-3g, which I had gotten used to having in Ubuntu, needed to be installed as well; installing them wasn't too much of a chore. As I use my Viglen MPC-L mainly as my irssi shell nowadays, I of course had to install irssi, plus some other stuff needed by it and my other usage patterns... so after installing apt-file, pastebinit, zsh and fail2ban for my pet peeves, and then tmux, irssi, irssi-scripts, libcrypt-blowfish-perl, libcrypt-dh-perl, libcrypt-openssl-bignum-perl, libdbi-perl, sqlite3 and libdbd-sqlite3-perl, I finally have approximately the system I needed.

All in all, the experience was a lot easier than what I had with Xubuntu in September 2010. It definitely surprised me and I kind of hope that this process wasn’t as easy and automated 18 months ago…

## January 27, 2012

### Aapo Rantalainen

#### Nokia Lumia 800 for Linux-developer

I got my Nokia Lumia 800 (a Windows Phone 7 device) from Nokia, and I consider myself a Linux developer.

I attached the Lumia to my computer and nothing happened. I went to a discussion forum and learned there is no way to access the phone from Linux. End of story (it was not a long story).

## January 24, 2012

### Sakari Bergen

#### WhiteSpace faces in Emacs 23

This is a good old case of RTFM, but since I spent a couple of hours figuring it out, I thought I’d blog about it anyway…

The WhiteSpace package in Emacs allows you to visualize whitespace in your code. The overall settings of the package are controlled with the ‘whitespace-style’ variable. Before Emacs 23 you didn’t need to include the ‘face’ option to make different faces work. However, since Emacs 23 you need to have it set.

Now I can keep obsessing about whitespace with an up-to-date version of Emacs, and maybe publicly posting stuff like this will help me remember to RTFM in the future also :)

## January 09, 2012

### Sakari Bergen

The idea for this all started with someone mentioning

it’d be good if there was some magic thing which did some SSH voodoo to get you a shell that the person on the other end could watch

So, I took a quick look around and noticed that Screen can already do multiuser sessions, which do exactly this. However, controlling the session requires writing commands to screen, which is both relatively complex for beginners and relatively slow if the remote user is typing in ‘rm -Rf *’ ;)

So, I created a wizard-like python script, which sets up a multiuser screen session and a simple one button GUI (using PyGTK) for allowing and disallowing the remote user access to the session. It also optionally creates a script which makes it easier for the remote user to attach to the session.

Known issues:

• The helper script creation process for the remote user does not check the user input and runs sudo. Even though the script warns the user, it’s still a potential security risk
• If the script is terminated unexpectedly, the screen session will stay alive, and will need to be closed manually before this script will work again

### Resolving the issues?

Fixing the security issue would be just a matter of more work. However, the lingering screens are a whole different problem: I tried to find out a way to get the pid for the screen session, but failed to find a way to do this in python. This would have made the lingering screen sessions less harmful, as all the communication could have been done with <pid>.<session> instead of simply <session>, which it uses now. The subprocess.Popen object contains the pid of the launched process, but the actual screen session is a child of this process, and thus has a different pid. If anyone can point me toward a solution to this, it’d be greatly appreciated!
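For what it's worth, one possible workaround (my suggestion, not something the post tried) is to parse the output of `screen -ls`, since GNU Screen already names sessions as <pid>.<session>; a minimal shell sketch over sample output:

```shell
#!/bin/sh
# Sample `screen -ls` output (whitespace simplified; the session name
# "shared-session" is a hypothetical example)
cat > screen-ls.sample <<'EOF'
There is a screen on:
        12345.shared-session    (Multi, detached)
1 Socket in /run/screen/S-user.
EOF

# Extract "<pid>.<session>" for the session named "shared-session"
awk '$1 ~ /\.shared-session$/ { print $1 }' screen-ls.sample
# prints: 12345.shared-session
```

With the pid in hand, all further commands could target <pid>.<session>, and a lingering session could be cleaned up with `screen -S 12345.shared-session -X quit`.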

## January 03, 2012

### Sakari Bergen

#### New site up!

I finally got the work done, and here's the result! I moved from Drupal to WordPress, as it feels better for my needs. So far I've enjoyed it more than Drupal.

I didn’t keep all of the content from my old site: I recreated most of it and added some new content. I also went through links to my site with Google’s Webmaster Tools, and added redirects to urls which are linked to from other sites (and resurrected one blog post).

It’s been a while since I did any PHP, HTML or CSS. I almost got frustrated for a moment, but after reading this article, things progressed much more easily. Thanks to the author, Andrew Tetlaw! I was also inspired by David Robillard’s site, which is mostly based on the Barthelme theme. However, I started out with Automattic’s Toolbox theme, customizing most of it.

If you find something that looks or feels strange, please comment!

## December 28, 2011

### Aapo Rantalainen

#### Christmas charity donation ideas

Christmas is a good time to donate money to charity, isn't it? Here are a couple of tips for those who want an easy way to do some good via PayPal.

# Wikipedia

Who doesn't know Wikipedia? But does everyone know that behind it stands quite a small foundation? For example, Google has roughly a million servers; Wikipedia has 679. Yahoo has 13,000 employees; Wikipedia has 95.

https://wikimediafoundation.org/wiki/Donate

# Free Software Foundation

The 'free' in the foundation's name does not mean free of charge, but 'freedom'. Software freedom means:

- permission to use it in any way whatsoever
- permission to study how it works and how it is made
- permission to change how it works (that is, to fix or improve it)
- permission to copy it for others, modified or unmodified

Free Software is a movement that invites You, too, to consider: "imagine a world in which all software is free." Are the programs You use free?

https://my.fsf.org/donate

## December 05, 2011

### Aapo Rantalainen

#### MeeGo on ExoPC

Even though Ubuntu runs very well on the ExoPC (see my last post), I had promised to return it with MeeGo, so here we go...

Download the latest image (meego-tablet-ia32-pinetrail-1.2.0.90.12.20110809.2.img) from http://repo.meego.com/MeeGo/builds/1.2.0.90/1.2.0.90.12.20110809.2/images/meego-tablet-ia32-pinetrail/

Copy it to a USB stick and boot the ExoPC from the stick. Yes, ok, ok, ok, ok and ok. Boot. Ready.

I wanted some challenge, so I decided to compile and run JamMo. It was as easy as on Ubuntu (manually upgraded). The game uses a fixed-size 800×480 window, so it would be handy to change the resolution of the ExoPC. Xrandr left black borders on the left and right, but the touchscreen still maps to the whole screen (so elements in the middle of the screen are accessible normally, but elements near the left and right borders are not).

Solution (partial): add a new screen mode and use it.

run

cvt 840 480


And it gives: "840x480_60.00"   31.25  840 864 944 1048  480 483 493 500 -hsync +vsync

Run (add these to your autorun script; they are cleared at every boot):

xrandr --newmode  "840x480_60.00"   31.25  840 864 944 1048  480 483 493 500 -hsync +vsync


And when you want to use that resolution, run

xrandr --output LVDS1 --mode 840x480_60.00


There are still small black borders, but they don't affect usage (the width must be a multiple of 8; you can test whether 848 is better...)
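One detail worth noting (my addition; the command list above omits it): on many setups a freshly created mode must also be attached to the output with `xrandr --addmode` before `--output ... --mode` will accept it. A small sketch that turns the cvt Modeline text into the three commands an autorun script would need:

```shell
#!/bin/sh
# Modeline text as printed by `cvt 840 480`
modeline='"840x480_60.00"   31.25  840 864 944 1048  480 483 493 500 -hsync +vsync'
output=LVDS1

# The mode name is the quoted first field of the modeline
name=$(printf '%s' "$modeline" | cut -d'"' -f2)

# Print the xrandr invocations to run (or paste into the autorun script)
echo "xrandr --newmode $modeline"
echo "xrandr --addmode $output $name"
echo "xrandr --output $output --mode $name"
```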

Issues:

* The task switcher is still in the old middle of the screen
* Coming back might leave half of the screen black (this corrects itself after the screen dims)
* The browser might rotate itself to portrait mode (even though it first starts in landscape)

## November 06, 2011

### Miia Ranta

#### Ubuntu 11.10 on an ExoPC/Wetab, or how I found some use for my tablet and learnt to hate on-screen keyboards

I attended an event in the spring that ended with the miraculous incident of being given an ExoPC to use. The operating system it came installed with was a bit painful to use (and I'm not talking about a Microsoft product), so I didn't find much use for the device. I flashed it with a new operating system image quite often, only to note that few if any of the problems were ever fixed in the UI. Since that operating system project is pretty much dead now, with participants moving to new areas and projects of interest, I decided to bite the bullet and flash my device with the newest Ubuntu.

Installation requires a USB memory stick made into installation media with the tools shipped with regular Ubuntu. A keyboard is also nice to have, to make the installation process feasible in the first place, or at least a much less painful experience. After the system is installed comes the pain of getting the hardware to play nice. Surprisingly, I've had no problems other than trying to figure out how to make the device and operating system realise that I want to scroll or right-click with my fingers instead of a mouse. Almost all the previous instructions I've come across involve (at best) Ubuntu 11.04 and a 2.6.x kernel, and the rest fail to give detailed instructions on how to make scrolling or right-clicking work with evdev. The whole process is very frustrating, and I still haven't figured everything out.

Anyway. First thing you notice, especially without the fingerscrolling working, is that the new scrollbars are a royal pain in the hiney. The problem isn’t as bad in places where the problem can be bypassed, like in Chromium with the help of an extension called chromeTouch where the fingerscrolling can be set to work, or in Gnome-shell which actually has a decent sized scrollbar, or uninstalling overlay-scrollbar altogether, which isn’t pretty, but it works.

The second immediate thing that slaps a cold wet towel on the face, after you've unplugged the USB keyboard, is the virtual keyboard situation. Ubuntu and its default environment Unity use OnBoard as the default on-screen keyboard. OnBoard is a complete keyboard with (almost) all the keys a normal keyboard would have, but it lacks a feature needed on a tablet computer: automatically hiding and unhiding itself. In addition to this annoyance, OnBoard had a tendency to swap the keyboard layout to what I assume to be either US or British instead of the Finnish one I had set as the default during installation. One huge problem with OnBoard, at least in my use, is that it ends up underneath the Unity interface, where it's next to useless.

I tried to install other virtual keyboards, like Maliit and Florence, but instructions and packages on Oneiric are lacking and anyway, I still don’t know how to change the virtual keyboard from OnBoard to something else. However, the virtual keyboard in a normal Gnome 3 session with Gnome-Shell seems to work more like the virtual keyboards should, but alas, it doesn’t seem to recognize the keyboard layout settings at all and thus I’m stuck to non-Finnish keyboard layout.

However among all these problems Ubuntu 11.10 manages to show great potential with both Unity and Gnome 3. Ubuntu messaging menu is nice, once gmnotify has been installed (as I use Chromium application Offline Gmail as my email client), empathy set up, music application of choice filled with music and browser settings synchronized.

I’ve found that the webcam works perfectly and the video call quality is much better than it has been earlier on my laptop where I’ve resorted into using GMails video call feature, because it Just Works. It’s nice to see that pulseaudio delivers and bluetooth audio works 100% with both empathy video calls and stereo music/video content.

Having read of the plans for future Ubuntu releases from blogposts of people who were attending UDS-P in Orlando this past week, I openly welcome our future tablet overlords. Ubuntu on tablets needs love and it’s nice to know it’s coming up. This all bodes well for my plan to take over the world with Ubuntu tablet, screen, emacs and chromium :-)

## October 29, 2011

### Ville-Pekka Vainio

#### Getting Hauppauge WinTV-Nova-TD-500 working with VDR 1.6.0 and Fedora 16

The Hauppauge WinTV-Nova-TD-500 is a nice dual tuner DVB-T PCI card (well, actually it’s a PCI-USB thing and the system sees it as a USB device). It works out-of-the-box with the upcoming Fedora 16. It needs a firmware, but that’s available by default in the linux-firmware package.

However, when using the Nova-TD-500 with VDR a couple of settings need to be tweaked or the signal will eventually disappear for some reason. The logs (typically /var/log/messages in Fedora) will have something like this in them:
vdr: [pidnumber] PES packet shortened to n bytes (expected: m bytes)
Maybe the drivers or the firmware have a bug which is only triggered by VDR. This problem can be fixed by tweaking VDR’s EPG scanning settings. I’ll post the settings here in case someone is experiencing the same problems. These go into /etc/vdr/setup.conf in Fedora:

EPGBugfixLevel = 0
EPGLinger = 0
EPGScanTimeout = 0

It is my understanding that these settings disable all EPG scanning done in the background, so VDR will only scan the EPGs of the channels on the transmitters it is currently tuned to. In Finland, most of the interesting free-to-air channels are on two transmitters and the Nova-TD-500 has two tuners, so in practice this should not cause many problems with outdated EPG data.

## August 25, 2011

### Miia Ranta

#### Things I learnt about managing people while being a Wikipedia admin

Just over four years ago I gave up my volunteer, unpaid role as an administrator of the Finnish Wikipedia. Today, while discussing with a friend, I realised what has been one of the most valuable lessons in both my professional life and hobbies. While I am quite pessimistic in general, I still benefit from these little nuggets of positive insight almost every day when communicating and working with other people.

• Assume Good Faith. “Unless there is clear evidence to the contrary, assume that people who work on the project are trying to help it, not hurt it.” Most people aren’t your enemies. Most people will not try to hurt you. If stupidity is abound, it’s (usually) not meant as a personal attack towards you, nor is it intentional.
• When someone does something that doesn’t immediately make sense, which contradicts your assumptions about the skills and common sense of a person you are dealing with, discuss it with them! Don’t make assumptions based on partial information or details, ask for more info so you don’t need to assume the worst! If something is unclear, asking won’t make things worse.

Pessimists are never disappointed, only positively surprised. But while the world seems like a dark and desolate place and humanity seems to be doomed, I still have to try to believe in the sensibility of people, and that we can make something special of the project we are trying to work for. Ubuntu, Wikipedia, Life... or just your day-to-day job.

## August 21, 2011

### Miia Ranta

#### And then, unexpectedly, life happens

I hope none of you have expected me to blog more often. It’s been over a year since I’ve last blogged, and so much has happened since I last did.

I’ve travelled to Cornwall, started a Facebook page that got a huge following in no time, fiddled a bit with CMS Made Simple at work, bought another Nexus One to replace one that broke and, after getting the broken one fixed, gave the extra to my sister as a Christmas present, taught Duncan how to make gravadlax and crimp Karelian pasties, visited Berlin and bought a game. I’ve attended a few geeky events, like the local MeeGo Network meetings in Tampere, Finland, the MeeGo Summit also in Tampere, the MeeGo Conference in San Francisco, US, and OggCamp’11 in Farnham, UK.

I’ve also taken few steps in learning to code in QML, poked around Arduino and bought a new camera, Olympus Pen E-PL1.

What else has happened? Well, among other things, my mother was diagnosed with cholangiocarcinoma right after New Year, and she passed away 30th of June.

Many things that I have taken for granted have changed or gone away forever. The importance of some things has changed as my life tries to find a new path to run in.

Blogging and some of my Open Source related activities have suffered, which I am planning to fix now that I feel strong enough to use my energy on these hobbies again. Sorry for the hiatus, folks.

Coming up, perhaps in the near future:

• Rants and Raves about Arduino
• Entries about social networking sites
• Camera/Photography jabber
• Mobile phone/Tablet chatter

So, just so you know, I’m alive, and will soon be in an RSS feed reader near you. AGAIN.

## August 06, 2011

### Ville-Pekka Vainio

#### The Linux/FLOSS Booth at Assembly Summer 2011

The Assembly Summer 2011 demo party / computer festival is happening this weekend in Helsinki, Finland. The Linux/FLOSS booth here is organized together by Finnish Linux User Group, Ubuntu Finland, MeeGo Network Finland and, of course, Fedora. I’m here representing Fedora as a Fedora Ambassador and handing out Fedora DVDs. Here are a couple of pictures of the booth.

The booth is mostly Ubuntu-coloured because most of the people here are members of Ubuntu Finland and Ubuntu in general has a large community in Finland. In addition to live CDs/DVDs, the MeeGo people also brought two tablets running MeeGo (I think they are both ExoPCs) and a few Nokia N950s. They are also handing out MeeGo t-shirts.

People seem to like the new multi-desktop, multi-architecture live DVDs that the European Ambassadors have produced. I think they are a great idea and worth the extra cost compared to the traditional live CDs.

## April 29, 2011

### Sakari Bergen

#### Website remake coming up, comments disabled

The format of my current website has not worked very well for me, and I'm a bit lazy with bloggy stuff. So I decided to remake this website. I've already made a new design, and will probably be switching from Drupal to WordPress because it's a bit simpler. I hope to get the new site up in a few months at the latest!

Due to a lot of spam recently, I disabled comments.

## March 21, 2011

### Jouni Roivas

#### Wayland

Recently Wayland has become a hot topic. Canonical has announced that Ubuntu will move to Wayland, and MeeGo also has great interest in it.

Qt has had (experimental) Wayland client support for some time now.

A very new thing is support for Qt as a Wayland server. With that, one can easily make one's own Qt-based Wayland compositor. This is huge. Until now, the only working Wayland compositor has been the one under wayland-demos. Using Qt for this opens many opportunities.

My vision is that Wayland is the future. And the future might be there sooner than you think...

## February 24, 2011

### Jouni Roivas

#### MeeGo status

Our CEO started a blog: http://cannedbypasi.blogspot.com/

He wrote first entry about MeeGo and Qt status.

Shortly: MeeGo is alive and kicking.

## January 03, 2011

### Ville-Pekka Vainio

#### Running Linux on a Lenovo Ideapad S12, part 2

Here’s the first post of what seems to be a series of posts now.

## acer-wmi

I wrote about acer-wmi being loaded on this netbook to the kernel’s platform-driver-x86 mailing list. That resulted in Chun-Yi Lee writing a patch which adds the S12 to the acer-wmi blacklist. Here’s the bug report.

I did a bit of googling on the ideapad-laptop module and noticed that Ike Panhc had written a series of patches which enable a few more of the Fn keys on the S12. The git repository for those patches is here. Those patches are also in linux-next already.

So, I cloned Linus' master git tree, applied the acer-wmi patch and then git pulled Ike's repo. Then I followed these instructions, except that Fedora's sources are now in git, so you need to do something like `fedpkg co kernel; cd kernel; fedpkg prep` and then find the config file suitable for you. Now I have a kernel which works pretty well on this system, except for the scheduling/sleep issue mentioned in the previous post.
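The steps above might be sketched roughly as follows; note that the patch file name and the URL of Ike's repository are placeholders here, not the exact ones used:

```
# Clone Linus' master tree, apply the acer-wmi patch, pull Ike's Fn-key patches
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
cd linux-2.6
git am ../acer-wmi-blacklist.patch      # placeholder patch file name
git pull <url-of-ikes-repo> master      # placeholder repository URL

# Check out Fedora's kernel package sources and unpack/prepare them
fedpkg co kernel
cd kernel
fedpkg prep
```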

## December 27, 2010

### Ville-Pekka Vainio

#### Running Linux (Fedora) on a Lenovo Ideapad S12

I got a Lenovo Ideapad S12 netbook (the version which has Intel’s CPU and GPU) a few months ago. It requires a couple of quirks to work with Linux; I’ll write about them here in case they’re useful to someone else as well.

##### Wireless

The netbook has a "Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01)" wifi chip. It works with the "b43" open source driver, which is in the kernel. However, I think that it may not actually reach the speeds it should. You could also use the proprietary "wl" kernel module, available in RPM Fusion as "kmod-wl", but I don't like to use closed source drivers myself.

The b43 driver needs the proprietary firmware from Broadcom to work with the 4312 chip. Following these instructions should get you the firmware.

##### Kernel

The kernel needs the “nolapic_timer” parameter to work well with the netbook. If that parameter is not used, it seems like the netbook will easily sleep a bit too deep. Initially people thought that the problem was in the “intel_idle” driver, the whole thing is discussed in this bug report. However, according to my testing, the problem with intel_idle was fixed, but the netbook still has problems, they are just a bit more subtle. The netbook boots fine, but when playing music, the system will easily start playing the same sample over and over again, if the keyboard or the mouse are not being used for a while. Apparently the system enters some sort of sleeping state. I built a vanilla kernel without intel_idle and I’m seeing this problem with it as well.
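For reference, the parameter goes on the kernel command line. With the GRUB legacy configuration Fedora used at the time, the entry might look roughly like this (the kernel version and root device here are placeholders):

```
# /boot/grub/grub.conf -- note nolapic_timer at the end of the kernel line
title Fedora (2.6.37)
        root (hd0,0)
        kernel /vmlinuz-2.6.37 ro root=/dev/sda2 rhgb quiet nolapic_timer
        initrd /initramfs-2.6.37.img
```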

Then there’s “acer-wmi”. The module gets loaded by the kernel and in older versions it was probably somewhat necessary, because it handled the wifi/bluetooth hardware killswitch. It causes problems with NetworkManager, though. It disables the wifi chip on boot and you have to enable wifi from the NetworkManager applet by hand. Here’s my bug report, which hasn’t gotten any attention, but then again, I may have filed it under the wrong component. Anyway, in the 2.6.37 series of kernels there is the “ideapad_laptop” module, which apparently handles the hardware killswitch, so acer-wmi shouldn’t be needed any more and can be blacklisted.
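Blacklisting a module is just a one-line modprobe configuration snippet, for example (the file name is arbitrary):

```
# /etc/modprobe.d/blacklist-acer-wmi.conf
# Keep acer-wmi from loading; ideapad_laptop handles the killswitch instead
blacklist acer-wmi
```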

## November 29, 2010

### Jouni Roivas

#### Encrypted rootfs on MeeGo 1.1 netbook

I promised to share my scripts for encrypting the rootfs on my Lenovo Ideapad running MeeGo 1.1. It's currently just a dirty hack, but I thought it would be nice to share it with you.

My scripts use cryptoloop. Unfortunately, the MeeGo 1.1 netbook stock kernel didn't support dm_crypt, so that was a no-go. Of course I could have compiled the module myself, but I wanted an out-of-the-box solution.

The basic idea is to create a custom initrd and use it. My solution needs a Live USB stick to boot and do the magic. Another USB drive is also needed to keep the current root filesystem safe while encrypting the partition. I don't know if it's possible to encrypt "in place", that is, using two loopback devices; in any case, this is the safe solution.

For the busy ones, just boot the MeeGo 1.1 Live USB and grab these files:
http://kaaos.huutonauru.net/meego/netbook_rootfs_crypt/crypt_hd.sh
http://kaaos.huutonauru.net/meego/netbook_rootfs_crypt/mkcryptrd.sh

Then:
chmod a+x crypt_hd.sh mkcryptrd.sh
su
./crypt_hd.sh

If you have more time and want to double-check everything, please follow the instructions at: http://kaaos.huutonauru.net/meego/netbook_rootfs_crypt/README

This solution has at least one drawback: once the kernel is updated, you have to recreate the initrd. For that purpose I created a tiny script that can be run after a kernel update:
http://kaaos.huutonauru.net/meego/netbook_rootfs_crypt/update_initrd.sh

That script also needs the mkcryptrd.sh script above.

Of course that may break your system at any time, so be warned.

For my Lenovo Ideapad S10-3t and MeeGo 1.1 netbook it worked fine. My test case was to first make a completely fresh installation from the Live/installation USB, boot again, and set up the cryptoloop from the Live USB. After that I could easily boot my encrypted MeeGo 1.1. It asks for the password at a very early phase of the boot process; once it's entered correctly, the MeeGo 1.1 system boots up normally.

This worked for me, but I give no guarantee that it will work for you. However, you're welcome to send patches and improvements.

UPDATE 29.11.2010:
Some people have reported problems when they have a different kernel version than the one on the Live USB: they're unable to boot back into their system. I'll try to figure out a solution for this issue.

## June 14, 2010

### Miia Ranta

#### California Dreamin’, release 1.2.1 (LCS2010, MeeGo workshop videos)

As promised earlier, I’ve now published four of the sessions from the Linux Collaboration Summit 2010, which was held in San Francisco in April. They’re viewable on blip.tv, and I’ve decided to follow the licensing the Linux Foundation itself used for the videos of the previous day, so the videos are licensed under Creative Commons Attribution. I spent a lot of time editing the videos, but I guess in the end they’re fairly good. The sound quality isn’t magnificent, but most of the time you can tell what is actually said… I haven’t yet uploaded the MeeGo question hour or the panel, because I’m still not quite convinced that the sound quality is good enough. If you want them on blip.tv, please leave a comment.

Without further ado, here are the episodes so far:

<3 <3

## April 29, 2010

### Matti Saastamoinen

#### Ubuntu 10.04 LTS and the Tampere release event

Ubuntu 10.04 LTS will be released today, April 29th. It is a so-called LTS release (Long Term Support), which receives free security and maintenance updates for three years on the desktop version and five years on the server version. An LTS version of Ubuntu comes out every two years, with more experimental interim releases every six months. The press release published by Ubuntu Suomi describes the most significant changes in version 10.04.

Release events are organized around the world whenever a new version of Ubuntu comes out, and LTS releases attract extra attention and effort. The main release event for Ubuntu 10.04 in Finland is organized by COSS, the Finnish Centre for Open Source Solutions, and Ixonos Oyj in Tampere on May 5th from 15:00 to 19:00. The venue is Demola, located in the Finlayson area. In addition to Tampere, release events are confirmed for Oulu on Friday, April 30th, and Pori on Saturday, May 15th.

The popularity of the Tampere event has certainly surprised everyone, including us organizers. Nearly 150 people have already signed up, so nobody has to show up alone! The event consists of a few presentations about Ubuntu, food and general socializing, and of course showing off and marveling at Ubuntu. Piles of Ubuntu 10.04 LTS Finnish Remix CDs will be brought along for attendees to take home, and a Nokia N900 will be raffled off among the participants.

Ixonos has been commendably active in supporting the event, and the original idea for a Tampere release event came from the company. At the event we will hear in more detail how Ubuntu is used at Ixonos and why it is important to the company. Pleasingly, many of those who have signed up represent various companies.

There is still room at the event, and registration closes on Friday, April 30th at 12:00. There will hardly be enough seats for everyone, so it's worth arriving early if you want a seat for the presentations.

Programme and registration at http://www.coss.fi/ubuntufest.