blogit.vapaasuomi.fi

November 15, 2014

Viikon VALO

4x46 Vokoscreen - Viikon VALO #202

Vokoscreen is an easy and clear-cut tool for making screencast videos on Linux.
Making screencast videos with Vokoscreen is very easy. The program's user interface is simple, yet it still offers the most essential features and functions. Screencast videos are quite useful in teaching: they can be used either to teach the use of an application or to record other computer-based instruction.

The program's settings are grouped clearly onto tabs. On the first tab you choose how the captured area is selected, whether the webcam image is shown on screen, and whether the magnifier function is used. The second tab holds the audio settings: whether PulseAudio or ALSA is used to manage audio, and which audio device is used. The third tab contains the video settings: the frame rate, the video format, and whether the mouse pointer is shown in the video. The fourth tab contains miscellaneous settings, including where the video file is saved, which video player is used for previewing, and how the Vokoscreen window itself is displayed.

There are three options for selecting the capture area: the whole screen (fullscreen), a window, or an area. In a multi-monitor setup, full-screen capture is limited to the primary display, which is a sensible choice. If a window is selected, the program crops the capture to the window's borders, and any other windows visible within that area also show up in the video. If the window is moved during capture, the capture area moves to match the window's new position. With area capture, the user can mark out any region of the screen.

The program can overlay the webcam image on top of the captured area, making it easy to get the presenter into the video as well. The size of the webcam image can be chosen from the context menu opened with the right mouse button. Audio can likewise be recorded into the video through a microphone; the microphone to use is selected on the audio tab.

In instructional videos in particular, it can be useful to highlight, say, certain buttons or menu selections. The magnifier function helps here: it shows the area under the mouse pointer magnified next to it.

The capture controls themselves are quite simple and consist of only a few buttons. "Start" begins the video capture, "Stop" ends it, and "Pause" lets you suspend capture and resume into the same file. "Play" opens the captured file in the chosen preview program, for example the VLC media player. There is also a "Send" button for sending the video by email. Vokoscreen also shows buttons for the "Start", "Stop" and "Pause" functions in the desktop's system tray, so Vokoscreen's own window can stay minimised during capture.

It is worth experimenting with the settings to see what works best on your hardware. On the test machine, for example, 15 frames per second seemed to be a suitable frame rate. On the same machine PulseAudio appeared to cause audio/video synchronisation problems, whereas ALSA worked fine.

For capturing and encoding the video, Vokoscreen uses a command line program called avconv. When started from the command line, Vokoscreen prints out the avconv command it uses, with all its parameters. A capable user can copy that command and use it without Vokoscreen, for example in a script of their own.
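
As a rough illustration, the command Vokoscreen constructs is of this general shape; the geometry, frame rate and file name here are invented, but the x11grab and alsa input options are standard avconv usage:

avconv -f x11grab -r 15 -s 1920x1080 -i :0.0 -f alsa -i default screencast.mkv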

Homepage
http://www.kohaupt-online.de/hp/
Source code
https://github.com/vkohaupt/vokoscreen
License
GNU GPL v.2
Works on the following platforms
Linux
Installation
The program is available directly from at least the Ubuntu and Debian Jessie package repositories. A zip package containing builds for several versions of Debian, Ubuntu and OpenSuse can also be downloaded from Vokoscreen's homepage.

Text: Pesasa
Screenshots: Pesasa

by pesasa at November 15, 2014 09:17 PM

November 08, 2014

Viikon VALO

4x45 QRemoteControl - Viikon VALO #201

QRemoteControl is software that turns a mobile device into a remote control for a computer over a wireless network connection.
QRemoteControl is a server/client pair of programs; once installed, a mobile device becomes a handy remote control for driving, say, a computer used as a media player or a presentation machine. The server program is available for Windows, Mac OS X and Linux, and the client runs on Android, iOS, SailfishOS, MeeGo, Symbian and BlackBerry devices, so device support is quite comprehensive.

The user interface of the remote control application installed on the mobile device is divided into four views: remote control, custom icons, mouse, and keyboard. The remote control view is suited especially to media and presentation use, as it comes ready with the buttons most commonly needed there, such as play and stop, volume control and the arrow keys. The keyboard view works as an ordinary keyboard offering the most frequently needed keys; instead of the program's own keyboard you can also use the mobile device's own virtual keyboard. The mouse view works like the touchpad of an ordinary laptop: it has a touch area covering nearly the whole screen, with scroll bars at its edge and the three mouse buttons. The view also provides ctrl, alt and shift keys, which are sometimes needed together with mouse actions. All the buttons in this view are sticky, so for example dragging is easy with this mouse. The fourth view holds a set of icons created by the user, each of which runs a command or action the user has defined on the server side.

As its counterpart, the client application on the mobile device needs a server program running on the computer to be controlled, listening for connections coming over the network. The server can be set to start automatically at login, or started manually only when needed. A password can, and should, be defined for connections; it is then required to establish a connection. The settings also allow adjusting, among other things, the sensitivity of the mouse and keyboard. Through the server settings the user can add new shortcut icons, shown on the mobile device, for commands, key combinations or combinations of commands. The server settings can also be used to customise the keystrokes produced by the buttons in the mobile device's remote control view.

Of the client programs tested, the Android version felt somewhat jerky at times, and mouse use, for example, was not entirely smooth. On SailfishOS, by contrast, use was very smooth.

Homepage
http://qremote.org/
Source code
Server
Client
License
GNU GPL v.3+
Works on the following platforms
Server: Linux, Windows, Mac OS X
Client: Android, iOS, SailfishOS, MeeGo, BlackBerry, Symbian
Installation
Instructions and links for installation can be found on the program's homepage.
Links
QRemoteControl - Quick Demo (Youtube)
@qremote (Twitter)

Text: Pesasa
Screenshots: Pesasa

by pesasa at November 08, 2014 01:13 PM

November 07, 2014

Riku Voipio

Adventures in setting up local lava service

Linaro uses LAVA as a tool to test a variety of devices. So far I had not installed it myself, mostly because I assumed it to be enormously complex to set up. But thanks to Neil Williams' work on packaging, installation has become a lot easier. Follow the Official Install Doc and the Official install to Debian Doc; the process looks roughly like this:

1. Install Jessie into kvm


kvm -m 2048 -drive file=lava2.img,if=virtio -cdrom debian-testing-amd64-netinst.iso
2. Install lava-server

apt-get update; apt-get install -y postgresql nfs-kernel-server apache2
apt-get install lava-server
# answer debconf questions
a2dissite 000-default && a2ensite lava-server.conf
service apache2 reload
lava-server manage createsuperuser --username default --email=foo.bar@example.com
$EDITOR /etc/lava-dispatcher/lava-dispatcher.conf # make sure LAVA_SERVER_IP is right
That's the generic setup. Now you can point your browser to the IP address of the kvm machine and log in with the default user and the password you created.

3 ... 1000. Each LAVA instance is customized for the site's boards, network, serial ports, etc. In this example, I now add a single arndale board.


cp /usr/lib/python2.7/dist-packages/lava_dispatcher/default-config/lava-dispatcher/device-types/arndale.conf /etc/lava-dispatcher/device-types/
sudo /usr/share/lava-server/add_device.py -s arndale arndale-01 -t 7001
This generates an almost usable config for the arndale. As for site specifics, my board's console is behind a USB-to-serial adapter. Outside the kvm, I provide access to the serial ports using the following ser2net config:

7001:telnet:0:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT
7002:telnet:0:/dev/ttyUSB1:115200 8DATABITS NONE 1STOPBIT
TODO: make ser2net not run as root, and ensure USB-to-serial devices always get the same name.
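
For the persistent-name half of that TODO, one common approach is a udev rule keyed on the adapter's serial number; a sketch (the vendor, product and serial values are placeholders to be read from udevadm info output):

# /etc/udev/rules.d/99-usbserial.rules
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="A1B2C3D4", SYMLINK+="ttyUSB-arndale01"

ser2net can then point at /dev/ttyUSB-arndale01 instead of /dev/ttyUSB0.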

For automatic power reset, I wanted something cheap, yet something that wouldn't require much soldering (I'm not a real embedded engineer... I prefer the software side ;). I discussed this with Hector, who hinted at prebuilt relay boxes. I chose one from eBay, a kmtronic 8-port USB relay. So now I have this cute boxed nonsense hack.

The USB relay is driven with a short script, hard-reset-1


#!/bin/bash
# kmtronic protocol: 0xFF <relay number> <state>; serial line runs at 9600 baud
stty -F /dev/ttyACM0 9600
echo -e '\xFF\x01\x00' > /dev/ttyACM0   # relay 1 off: cut board power
sleep 1
echo -e '\xFF\x01\x01' > /dev/ttyACM0   # relay 1 back on
Sidenote: if you don't have or want an automated power relay for LAVA, you can always replace this script with something along the lines of "mpg123 puny_human_press_the_power_button_now.mp3".

Both the serial port and the reset script are on a server with the DNS name aimless. So we take the /etc/lava-dispatcher/devices/arndale-01.conf that add_device.py created and make it look like:


device_type = arndale
hostname = arndale-01
connection_command = telnet aimless 7001
hard_reset_command = slogin lava@aimless -i /etc/lava-dispatcher/id_rsa /home/lava/hard-reset-1
Since in my case I'm only going to test with tftp/nfs boot, the arndale board only needs to be set up with a u-boot bootloader ready at power-on.

Now everything is ready for a test job. I have a locally built kernel and device tree, and I export the directory using the httpd available by default in Debian... Python!


cd out/
python -m SimpleHTTPServer
Go to the lava web server, select api -> tokens and create a new token. Next we add the token and use it to submit a job:

$ sudo apt-get install lava-tool
$ lava-tool auth-add http://default@lava-server/RPC2/
$ lava-tool submit-job http://default@lava-server/RPC2/ lava_test.json
submitted as job id: 1
$
The first job should now be visible in the lava web frontend, in the scheduler -> jobs part. If everything goes fine, the relay will click in a moment and the job will finish in a few minutes.

by Riku Voipio (noreply@blogger.com) at November 07, 2014 09:03 AM

November 01, 2014

Viikon VALO

4x44 Bitcoin - Viikon VALO #200

Bitcoin is a digital currency based on open source code.
Bitcoin is a digital currency implemented with mathematical, cryptographic algorithms and operating on open source software and principles. Bitcoins are not issued by any central bank, and payments made with them are not routed through any bank but travel directly between users, peer to peer. Thanks to its cryptographic design, Bitcoin is secure and pseudonymous by nature. Bitcoins make it easy to send payments internationally and quickly, to implement simple payment in online services, and even to give small tips. Its weak points include the volatility of its exchange rates and problems with taxation.

Bitcoin was created in 2008 by a person or group operating under the pseudonym Satoshi Nakamoto. One bitcoin, abbreviated BTC, divides into a thousand millibitcoins (mBTC), and it can also be divided into 100 million smaller units called satoshis. At most 21 million bitcoins will ever be issued, and about 13 million are currently in circulation. Bitcoins are issued as a geometric series, so the rate of issue will slow down over time. At the time of writing, one bitcoin trades at about 327 US dollars, or roughly 269 euros.

The idea of Bitcoin rests on all payments made with it, called transactions, being public and stored in a public ledger maintained in a distributed fashion on a peer-to-peer network. Individual approved transactions are bundled into blocks, and the chronologically ordered chain of blocks (the "block chain") forms the public ledger of bitcoins. Each block contains a checksum computed from its own contents and from the checksum of the previous block in the chain. Computing an acceptable checksum for a block has deliberately been made laborious by cryptographic means. This guarantees the immutability of the ledger, since altering a single transaction would require recomputing the checksum of the block containing it. To keep the chain intact, new checksums would also have to be computed for all subsequent blocks, because each of them contains the checksum of the previous block. Furthermore, because the Bitcoin peer-to-peer network takes the authoritative block chain to be the version in widest use, i.e. the one that has grown longest, a party forging transactions would in practice need to control over half of the computing capacity used to confirm transactions. That is quite unlikely.
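
The chaining of checksums is easy to demonstrate with a toy shell sketch (sha256sum from GNU coreutils stands in for Bitcoin's actual double SHA-256 over real block structures): change the contents of the first "block" and every checksum after it changes too.

prev=0
for block in "payment A" "payment B" "payment C"; do
    prev=$(printf '%s%s' "$block" "$prev" | sha256sum | cut -d' ' -f1)
    echo "block checksum: $prev"
done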

The creation of new bitcoins, known as mining, is directly tied to how new bitcoin payments, the transactions, are approved and appended to the block chain. When a new transaction is made, the mining machines in the peer-to-peer network start computing the checksums needed to confirm it. The computation needed to add a new block to the chain takes about 10 minutes, and a transaction is typically considered confirmed once the chain contains at least six new blocks, counting the transaction's own block. By then the transaction is buried so deep in the ledger's history that changing it, i.e. spending already-spent money again ("double-spending"), is considered impossible.
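
The deliberate laboriousness can also be sketched in shell: a toy proof-of-work loop that searches for a nonce whose hash starts with four zero digits (real Bitcoin difficulty is vastly higher, and the real algorithm hashes block headers, not plain strings):

nonce=0
until printf 'block data %s' "$nonce" | sha256sum | grep -q '^0000'; do
    nonce=$((nonce+1))
done
echo "nonce found: $nonce"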

To send and receive payments, bitcoin users use various bitcoin applications and wallets. These can be web-based services, in which case the wallet is stored on the service's server protected by a password, or applications on the user's own computer or mobile device. The wallet managed by the application is in practice a list of private keys for public-key cryptography. Holding these private keys entitles the user to spend the still-unspent funds visible in the public ledger. For each bitcoin transaction a new bitcoin address is created for the recipient; the payment is sent to that address, and the recipient's private key authorises its use. An address is a string of 26-34 characters, for example: 1HUbRagBn18Qo5b7eS72H8WJJD2ByCcGFw. The wallet's balance consists, in practice, of all the unspent funds, summed across the transactions found in the public block chain, that the keys in the wallet are entitled to spend.

Spending unspent funds can be compared to spending the money in an ordinary wallet. If a person has two five-euro notes in their wallet, those notes came to them as the result of some earlier payment, for example as change from a larger purchase. To pay six euros, they hand over the two five-euro notes and receive, say, two two-euro coins as change. Correspondingly, when paying with bitcoins, the available funds came from some earlier payment, i.e. transaction. Such funds are called unspent outputs of transactions. The program in use automatically selects unspent outputs amounting to a sufficient number of bitcoins as the inputs of the new transaction, and the destination address as the output. If money is left over, the remainder is directed as change to an output whose address is controlled by a private key the user owns. The outputs used this way are marked in the ledger as spent ("spent output"), and once the transaction is confirmed the block chain prevents them from being used again. An example bitcoin transaction: 0a1c0b1ec0ac55a45b1555202daf2e08419648096f5bcc4267898d420dffef87.

Wallet applications are listed in the instructions on the bitcoin.org site. Applications are available for mobile devices and computers, and as services on the web. Dedicated hardware devices can also be used as wallets. An example of a desktop application is the original Bitcoin Core, also known as Bitcoin-qt. On Android, an easy-to-use application is Bitcoin Wallet, also available from F-Droid under the name "Bitcoin". At its simplest, a bitcoin transaction happens by scanning, with the mobile application, a QR code presented by the recipient, containing the recipient's address and the amount. Web-based wallet services are also usable and easy to access from several devices, but they typically charge a very small commission.

Since the right to spend bitcoin funds rests on the private keys stored in the electronic wallet, it goes without saying that the wallet must be taken good care of. Losing the wallet means that all the funds it controls are irretrievably lost and no one can access them any more. It is therefore important to regularly back up the wallet used by, say, a bitcoin application installed on a phone, and to keep the backup safe somewhere other than on the phone. The wallet is typically encrypted when a backup is made, so it is important to remember the password that decrypts it. Bitcoins can also be stored in a so-called paper wallet: a physical offline document that can be kept safe like ordinary cash. A paper wallet contains the public address and the private key giving access to the money sent to that address. The funds in a paper wallet can be "swept" back into electronic form, for example by scanning the QR code printed on the paper. In practice this means a new transaction to an address managed by the electronic wallet.

Bitcoins are typically obtained either by exchanging ordinary money for the virtual currency at an exchange service, or by earning them, either by selling something or by mining. Exchange services either exchange the money themselves or act as brokers between buyers and sellers. Depending on the seller, payment can be made in various ways, including cash, bank transfer and PayPal. In 2013, Europe's first Bitcoin ATM, which exchanges cash for bitcoins, was opened in the premises of the record store Levykauppa Äx in Helsinki's Asematunneli.

Because bitcoin payments use per-transaction addresses, the payments are pseudonymous. Thanks to this anonymity, bitcoin has, besides its positive uses, also gained popularity in criminal activity such as money laundering and the drug trade. Anonymous, unsupervised money flows also cause headaches for taxation, both for the tax authority and for the honest taxpayer. In Finnish legislation, for example, bitcoin does not have the status of an official currency, and paying taxes can involve complicated twists. In 2013 the Finnish Tax Administration published guidance on the income taxation of virtual currencies, including a few example cases.

Alongside Bitcoin, other virtual currencies have emerged, usually based on similar algorithms, such as Litecoin, Dogecoin and Namecoin.

Homepage
http://bitcoin.org
License of the Bitcoin Core program
MIT
Works on the following platforms
All
Installation
Bitcoins can be used with web-based wallet services or with wallet programs installed on your own device. Programs can be found on the bitcoin.org site.
Links
Bittiraha.fi
The Finnish Tax Administration's guidance
How Bitcoin Works in 5 Minutes (Technical) (Youtube)
How Bitcoin Works Under the Hood (Youtube)

Text: Pesasa
Screenshots: Pesasa

by pesasa at November 01, 2014 04:25 PM

Riku Voipio

Using networkd for kvm tap networking

Setting up basic systemd-networkd was recently described by Joachim, and the post inspired me to try it as well. The twist is that in my case I need a bridge, for my kvm with the LAVA server and arm/aarch64 qemu system emulators...

For background, qemu/kvm supports a few ways to provide networking to guests. The default is user networking, which requires no privileges but is slow and based on ancient SLIRP code. The other common option is tap networking, which is fast but complicated to set up. It turns out that with networkd and the qemu bridge helper, tap is easy to set up.


$ for file in /etc/systemd/network/*; do echo $file; cat $file; done
/etc/systemd/network/eth.network
[Match]
Name=eth1
[Network]
Bridge=br0

/etc/systemd/network/kvm.netdev
[NetDev]
Name=br0
Kind=bridge

/etc/systemd/network/kvm.network
[Match]
Name=br0
[Network]
DHCP=yes

Diverging from Joachim's simple example, we replaced "DHCP=yes" with "Bridge=br0" in eth.network. Then we proceed to define the bridge (in kvm.netdev) and give it an IP via DHCP in kvm.network. On the kvm side, if you haven't used the bridge helper before, you need to give the helper permissions (setuid root or cap_net_admin) to create a tap device to attach to the bridge. The helper needs a configuration file telling it which bridge it may meddle with.

# cat > /etc/qemu/bridge.conf <<__END__
allow br0
__END__
# setcap cap_net_admin=ep /usr/lib/qemu/qemu-bridge-helper
Now we can start kvm with bridge networking as easily as with user networking:

$ kvm -m 2048 -drive file=jessie.img,if=virtio -net bridge -net nic,model=virtio -serial stdio
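
A quick way to verify the result on the host, using standard iproute2 tools (the interface names match the configs above):

$ ip addr show br0   # the DHCP address should sit on the bridge
$ bridge link        # eth1 and the qemu tap device should be listed as members
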
The manpages systemd.network(5) and systemd.netdev(5) do a great job explaining the files. Qemu/kvm networking docs are unfortunately not as detailed.

by Riku Voipio (noreply@blogger.com) at November 01, 2014 10:21 AM

October 30, 2014

Wikimedia Suomi

Bringing Cultural Heritage to Wikipedia

Photo by: Teemu Perhiö, CC-BY-SA 4.0

Course participants editing Wikipedia at the first gathering at the Finnish Broadcasting Company Yle.

The Bring Culture to Wikipedia editathon course is already more than halfway through its span. The course, co-organised by Wikimedia Finland, Helsinki Summer University and six GLAM organisations, aims to bring more Finnish cultural heritage to Wikipedia.

The editathon gatherings are held at the various organisations' locations, where the participants get a ”look behind the scenes”: the organisations show their archives and present their fields of expertise. The course also provides a great opportunity to learn the basics of Wikipedia, as experienced Wikipedian Juha Kämäräinen gives lectures at each gathering.

Photo by: Teemu Perhiö, CC-BY-SA 4.0

Yle personnel presenting the record archives.

The first course gathering was held at the Archives of the Finnish Broadcasting Company Yle on 2nd October. The course attendees got familiar with the Wikipedia editor and added information to Wikipedia about the history of Finnish television and radio. The representatives of Yle also gave a tour of the tape and record archives. Quality images that Yle opened earlier this year were added to articles.

Course attendee Maria Koskijoki appreciated the possibility to get started without prior knowledge.

”The people at Yle offered themes of suitable size. I also got help in finding source material.”

Cooperation with GLAMs

Finnish National Gallery personnel presenting the sketch archives at the Ateneum Art Museum.

This kind of course is a new model of cooperation with GLAM organisations. The other cooperating organisations are Svenska litteratursällskapet i Finland, The Finnish National Gallery, Helsinki City Library, The Finnish Museum of Photography and Helsinki Art Museum. Wikimedia Finland’s goal is to encourage organisations in opening their high-quality materials to a wider audience.

There are many ways to upload media content to Wikimedia Commons. One of the new methods is using the GLAMWiki Toolset for batch uploads. Wikimedia Finland invited the senior developer of the project, Dan Entous, to hold a GW Toolset workshop for the representatives of GLAMs and the staff of Wikimedia Finland in September, before the beginning of the course. The workshop was the first of its kind outside the Netherlands.

Course coordinator Sanna Hirvonen says that GLAM organisations have begun to see Wikipedia as a good channel to share their specialised knowledge.

“People find the information from Wikipedia more easily than from the homepages of the organisations.”

This isn’t the first time that Wikimedians and culture organisations in Finland have co-operated: last year the Museum of Contemporary Art Kiasma organised a 24-hour Wikimarathon in collaboration with Wikimedia Finland. Over 50 participants added information about art and artists to Wikipedia. Wiki workshops have been held at the Rupriikki Media Museum in Tampere and at the Ateneum Art Museum in Helsinki.

A Wikipedian guiding a newcomer at the Ateneum Art Museum.

Images taken on the course can be viewed in Wikimedia Commons.
All Photos by Teemu Perhiö. CC-BY-SA 4.0.

by Teemu Perhiö at October 30, 2014 07:22 PM

October 29, 2014

Viikon VALO

4x43 Tinfoil for Facebook - Viikon VALO #199

Tinfoil for Facebook is an alternative, somewhat more privacy-conscious Facebook application for Android devices.
Tinfoil for Facebook is a small application that wraps a "sandbox" around Facebook's web interface. Compared to Facebook's own Android application, Tinfoil for Facebook requires considerably fewer permissions. The program asks for no access to contacts, the calendar, or much of anything else. Permissions are required only for network communication, storage, and coarse (network-based) location, and even location is used only if the user explicitly allows it in the program's settings. Compared to using Facebook in a browser, a separate application is in turn more pleasant to use. In addition, a separate application using a web view shields the user from possible snooping of browsing history and prevents a logged-in Facebook user from being tracked on other sites equipped with Facebook buttons.

The program's user interface consists of a full-screen web view, into which it loads Facebook's mobile site or, optionally, the full desktop version. Besides the web view, there is a menu, swiped in from the right edge of the screen, collecting a few of the most commonly needed shortcuts: jumping to the top of the page, refreshing the page, moving to the news feed or the notifications view, and posting updates. The menu also gives access to the program's settings and lets you close the program.

In the settings you can allow the use of coarse location, allow links to open in the application itself instead of an external browser, or block the loading of images to reduce network traffic. All these options are off by default. The settings also let you route the program's network traffic through a proxy server.

Tinfoil for Facebook can also serve as a remedy for the much-publicised removal of the chat feature from Facebook's own application, as chat remains available in the web view as before. The program's considerably smaller size is another advantage over the "genuine" Facebook application.

The program can be installed from the Google Play store or from the F-Droid application repository.

As a downside, the F-Droid service notes that using the program requires logging in to Facebook.

Homepage
https://github.com/velazcod/Tinfoil-Facebook
F-Droid page
On Google Play
License
Apache2
Works on the following platforms
Android, SailfishOS
Installation
The program is easiest to install via F-Droid or Google Play.

Text: Pesasa
Screenshots: Pesasa

by pesasa at October 29, 2014 06:45 PM

October 17, 2014

Viikon VALO

4x42 Free Pascal - Viikon VALO #198

Free Pascal is an open source compiler for the Pascal programming language. Free Pascal works on many operating systems and processors. Free Pascal compiles Object Pascal and several Pascal dialects, including Turbo Pascal, Delphi and Mac Pascal.

Pascal was originally a procedural programming language. Niklaus Wirth developed it at the turn of the 1960s and 1970s on the basis of Algol, with teaching use especially in mind. The language is named after the mathematician Blaise Pascal. Close to Pascal are Wirth's later languages Modula-2 and Oberon, which can be considered Pascal's "descendants". Pascal was a popular teaching language from the 1970s until the early 1990s, when C displaced it.

The obligatory example, i.e. Hello World in Pascal. This one actually says "Goodbye, World!", but the code came from Rosetta Code and I did not want to modify it.

  program byeworld;
  begin
    writeln('Goodbye, World!');
  end.

The first mechanical calculator, developed by Blaise Pascal

If you try that program in a graphical desktop environment, the program's window may close immediately when the program finishes, before you can see what it printed on the screen. The cure is a readln; command added before the end. line: the program then waits for the Enter key to be pressed, and only exits after that.
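
For example, the Rosetta Code program above with that pause added:

  program byeworld;
  begin
    writeln('Goodbye, World!');
    readln;
  end.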

Pascal entered professional use thanks to Borland's Turbo Pascal in the 1980s. Turbo Pascal's then-overwhelming strengths led other software houses producing programming languages to gradually abandon their own Pascal compilers.

Object Pascal was developed on top of Pascal by adding features for object-oriented programming. It also includes exceptions and exception handling. A program can be split into compilation units, which makes large software projects easier to put together. The original Pascal was meant for teaching programming and lacked the features professional programmers need. Free Pascal has no such shortcoming: you can even code games with it, for example Slot Cars: The Video Game, which has a video trailer.

Like Turbo Pascal, Free Pascal is a very fast compiler. The longer test program shown in the screenshots compiled in five hundredths of a second. Larger programs take more time, but they can be split into smaller compilation units, and usually only one unit is modified between builds, so only that one unit needs to be recompiled and the executable linked from it and the other already-compiled parts.

An integrated development environment (IDE) is included, started with the command fp. The IDE is quite similar to what Turbo Pascal's was back in the day. One improvement is that the mouse works, so there is less need for keyboard shortcuts.

Free Pascal needs to be told which dialect the program being compiled is supposed to be. This is done on the command line with the -M option, or in the IDE via the menu Options | Compiler | Compiler Mode.
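
For example, on the command line (the file names are invented; tp and delphi are two of the mode names fpc accepts):

  fpc -Mtp oldprog.pas
  fpc -Mdelphi newprog.pas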

Free Pascal could compile a program I wrote in 1995 with Turbo Pascal 7. Back then DOS was in use; now the program could be run on Linux. Time has not rotted my code. The one adjustment needed was that the names of the files I read from a floppy into Linux were all in upper case, while in the source code the compilation unit names were in lower case apart from an initial capital. Since case matters in Linux file names, the file names had to be fixed.

Lazarus is a graphical development environment that uses Free Pascal and is itself implemented in Free Pascal.

Homepage
http://freepascal.org
License
GNU GPL
Works on the following platforms
DOS, FreeBSD, Haiku, Linux, Mac OS X/iOS/Darwin, MorphOS, Nintendo GBA, Nintendo DS, Nintendo Wii, OS/2, WinCE, Win32, Win64.
Installation
On Linux distributions it is available from the distribution's own package repositories. For other operating systems an installer can be downloaded from the Free Pascal web site.
Documentation
The web site has plenty of documentation: http://www.freepascal.org/docs.var.
The Free Pascal wiki has pages in Finnish too: http://wiki.freepascal.org/Main_Page/fi
Information about the Pascal language in Finnish on the web: http://www.cs.tut.fi/~jkorpela/Pascal.html
Ohjelmointiputka's Pascal programming guide: http://www.ohjelmointiputka.net/oppaat/opas.php?tunnus=pascal01
SchoolFreeware's tutorials and videos: http://www.schoolfreeware.com/Free_Pascal_Tutorials.html
There are tutorial videos on Youtube.

More information

Text: Taleman
Screenshots: Taleman
Illustration: David Monniaux (CC-BY-SA, https://commons.wikimedia.org/wiki/File:Arts_et_Metiers_Pascaline_dsc038...)

by pesasa at October 17, 2014 07:22 PM

October 16, 2014

Wikimedia Suomi

Swedish Wikipedia grew with help of bots

For a very long time Finland was part of Sweden. Maybe that explains why the Finns now always love to compete with the Swedes. And when I noticed that the Swedish Wikipedia is much bigger than the Finnish one, I started to challenge people in my trainings: we can't let the Swedes beat us in this Wikipedia battle!

I was curious about how they did it, and later I found out they had used a “secret” weapon: bots. So when I was visiting Wikimania in London in August, I did some video interviews on the subject.

First Johan Jönsson from Sweden tells more about the articles created by bots and what he thinks of them:

http://www.youtube.com/watch?v=BjFXtWAgymw

Not everyone likes the idea of bot-created articles, and Erik Zachte, Data Analyst at the Wikimedia Foundation, initially shared this feeling. Then something happened, and he has since changed his view. Learn more about this at the end of this video interview:

http://www.youtube.com/watch?v=UWBIYMUypWA

Now I am curious to hear your opinion about bot-created articles! Should we all follow the Swedes and grow the number of articles in our own Wikipedias?

PS. There are more Wikipedia-related video interviews on my YouTube channel, in a playlist called Wiki Wednesday.

by Johanna Janhonen at October 16, 2014 10:13 AM

October 07, 2014

Ubuntu-blogi

The Ubuntu Suomi forums are out of service

Canonical's admins have detected a security breach on the Ubuntu Suomi forum and shut the service down as a safety measure. The breach was most likely enabled by the outdated software version that was in use. Ubuntu Suomi had intended to move to different software even before the breach, as the software was known to be outdated and no longer supported by Canonical.

We aim to get a new service into use as soon as possible. There is no word yet on the new platform or where it will be hosted, but we will announce it as soon as we can. Restoring the old forum content is being investigated (backups exist; the question is mainly where and in what form to put them).

For now we recommend using the Ubuntu Suomi Google+ community (see also the profile page), the Facebook page, or the IRC channel #ubuntu-fi on the Freenode network. You can join the channel with a browser-based client at https://kiwiirc.com/client/irc.ubuntu.com:+6697/#ubuntu-fi.

In addition, the Linux.fi discussion forums are available at http://linux.fi/foorumi/.

Note that even though several people appear to be present on the IRC channel, not all of them necessarily follow the discussion continuously. Getting an answer can take quite a while; we hope you can wait, so that we get a chance to help you. Patience pays!

We do not know what data, if any, has leaked from the forums, but of the data hidden from users, email addresses and salted password hashes, among other things, may have leaked. If you have used the same password in some other service, we recommend changing it.

by Tm_T at October 07, 2014 07:30 PM

October 01, 2014

Wikimedia Suomi

GLAMs and GLAMWiki Toolset

The GLAMWiki Toolset project is a collaboration between various Wikimedia chapters and Europeana. The goal of the project is to provide easy-to-use tools for making batch uploads of GLAM (Galleries, Libraries, Archives & Museums) content to Wikimedia Commons. Wikimedia Finland invited the senior developer of the project, Dan Entous, to Helsinki to hold a GW Toolset workshop for the representatives of GLAMs and the staff of Wikimedia Finland on 10th September. The workshop was the first of its kind outside the Netherlands.

GLAMWikiToolset training in Helsinki. Photo: Teemu Perhiö. CC-BY

I took part in the workshop in the role of tech assistant for Wikimedia Finland. Since the workshop I have been trying to figure out what is needed to use the toolset from a GLAM perspective. In this text I concentrate on the technical side of these requirements.

What is needed for GWToolset?

From a technical point of view, the use of GWToolset can be split into three sections. First, there are things that must be done before using the toolset. The GWToolset requires metadata as an XML file that is structured in a certain way. The image files must also be addressable by direct URLs, and the domain name of the image server must be added to the upload whitelist in Commons.

The second section concerns practices in Wikimedia Commons itself. This means getting to know the templates, such as the institution, photograph and artwork templates, as well as finding the categories that are suitable for the uploaded material. For someone who is not a Wikipedian – like myself – it takes a while to get to know the templates and especially the categories.

The third section is actually making the uploads using the toolset itself, which I find easy to use. It has a clear workflow, and with a little assistance there should be no problems for GLAMs using it. Besides, there is a sandbox called Commons Beta where one can rehearse before going public.

I believe that the bottleneck for GLAMs is the first section: things that must be done before using the toolset. More precisely, creating a valid XML file for the toolset. Of course, if an organisation has a competent IT department with resources to work with material donations to Wikimedia Commons, then there is no problem. However, this could be a problem for smaller – and less resourceful – organisations.

Converting metadata in practice

Like I said, the GWToolset requires an XML file with a certain structure. As far as I know, there is no information system that could directly produce such a file. However, most systems are able to export metadata in XML format. Even though the exported file is not valid for GWToolset, it can be converted into such a file with XSLT.

XSLT is designed for this specific task, and it has a very powerful template mechanism for XML handling. This means that the amount of code stays minimal compared to the other options. The good news is that XML transformations are relatively easy to do.

XSLT is our friend when it comes to XML manipulation.

In order to learn what is needed for such transforms with real data, I made a couple of practical demos. I wanted to create a very lightweight solution for transforming the metadata sets for the GWToolset. Modern web browsers are flexible application platforms, and web scraping, for example, can be done easily with Javascript.

A browser-based solution has many advantages. The first is that every Internet user already has a browser, so there is no downloading, installing or configuring needed. The second advantage is that browser-based applications that use external datasets do not create traffic to the server where the application is hosted. Browsers can also be used locally. This allows organisations to download the page files, modify them, make conversions locally in-house, and get their materials onto Wikimedia Commons.

XSLT of course requires a platform to run on. A Javascript library called Saxon-CE provides that platform in browsers. So a web browser offers all that is needed for metadata conversions: web scraping, XML handling and conversions through XSLT, and user interface components. Of course, XSLT files can also be run in any other XSLT environment, like xsltproc.
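
For example, the first demo's stylesheet (linked below) could be run outside the browser like this; the input file name here is invented:

xsltproc simberg_clean.xsl simberg_metadata.xml > gwtoolset_input.xml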

Demos

Blenda and Hugo Simberg, 1896. Source: The National Gallery of Finland, CC BY 4.0

The first demo I created uses an open data image set published by the Finnish National Gallery. It consists of about one thousand digitised negatives of and by the Finnish artist Hugo Simberg. The set also includes digitally created positives of the images. The metadata is provided as a single XML file.

The conversion in this case is quite simple, since the original XML file is flat (i.e. there are no nested elements). Basically the original data is passed through as it is, with a few exceptions. The "image" element in the original metadata includes only an image id, which must be expanded to a full URL. I used a dummy domain name here, since the images are available as a zip file and therefore cannot be addressed individually. Another exception is the "keeper" element, which holds the name of the owner organisation. This was changed from the Finnish name of the National Gallery to the name that corresponds to their institutional template in Wikimedia Commons.
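
As a sketch, the URL expansion can be done with a template along these lines (the base URL stands in for the dummy domain mentioned above):

<xsl:template match="image">
  <image>http://example.org/images/<xsl:value-of select="."/>.jpg</image>
</xsl:template>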

example record:
http://opendimension.org/wikimedia/simberg/xml/simberg_sample.xml
source metadata:
http://www.lahteilla.fi/simberg-data/#/overview
conversion demo:
http://opendimension.org/wikimedia/simberg/
direct link to the XSLT:
http://opendimension.org/wikimedia/simberg/xsl/simberg_clean.xsl

Photo: Signe Brander. Source: Helsinki City Museum, CC BY-ND 4.0

In the second demo I used the materials provided by the Helsinki City Museum. Their materials in Finna are licensed as CC BY-ND 4.0. Finna is an “information search service that brings together the collections of Finnish archives, libraries and museums”. Currently there is no API to Finna. Finna provides metadata in the LIDO format, but there is no direct URL to the LIDO file. However, the LIDO can be extracted from the HTML.

LIDO is a deep format, so the conversion mostly consists of picking elements from the LIDO file and placing them in a flat XML file. For example, the name of the author sits in a quite deep structure in LIDO.

example LIDO record:
http://opendimension.org/wikimedia/finna/xml/example_LIDO_record.xml
source metadata:
https://hkm.finna.fi/
conversion demo:
http://opendimension.org/wikimedia/finna/
(Please note that the demo requires the same-origin-policy restrictions to be loosened in the browser. The simplest way to do this is to start Google Chrome with the switch --disable-web-security. On Linux that would be: google-chrome --disable-web-security, and on Mac (sorry, I cannot test this): open -a "Google Chrome" --args --disable-web-security. For Firefox, see: http://www-jo.se/f.pfleger/forcecors-workaround)
direct link to the XSLT:
http://www.opendimension.org/wikimedia/finna/xsl/lido2gwtoolset.xsl

Conclusion

These demos are just examples; no actual data has yet been uploaded to Wikimedia Commons. The aim is to show that the XML conversions needed for GWToolset are relatively simple, and that in order to use GWToolset an organisation does not have to have an army of IT engineers.

The demos could certainly be better. For example, the author name must be changed to reflect the author name in Wikimedia Commons. But again, that is just a few lines of XSLT and it is done.

by Ari Häyrinen at October 01, 2014 07:22 AM

September 25, 2014

Wikimedia Suomi

Building an Open Finland

Avoin Suomi 2014, 15-16 September 2014. Photo: Kimmo Virtanen. CC-BY.

The Avoin Suomi 2014 (Open Finland) event gathered a wide range of open knowledge and open data actors at Wanha Satama in Helsinki. Wikimedia Suomi took part as an exhibitor on a joint stand with the AvoinGLAM network. The stand presented Wikimedia's activities from different angles. GLAM work was also represented by the materials opened for free use on the Open Cultural Data master course. In addition, Wikimedia took part on the eOppimiskeskus (Finnish eLearning Centre) stand.

The main purpose of the Avoin Suomi event was to present various open data projects and to encourage public authorities to open up their data reserves. Open knowledge is clearly considered important at the level of the Finnish state, illustrated by the fact that the fair was organised by the Prime Minister's Office and the opening speech was given by Prime Minister Alexander Stubb.

So what can Wikimedia offer public sector organisations? Wikimedia does open knowledge at a practical level. The Wikimedia projects Wikipedia and Commons, the repository for sharing media files, are already well-known international and multilingual platforms. With these platforms, both cultural organisations and government authorities can open up and link their own data reserves. Wikimedia is a non-profit organisation, and its sites are free of charge and free of advertising. This autumn Wikimedia Suomi is organising the Bring Culture to Wikipedia course in cooperation with cultural organisations.

Wikidata is a new way to open machine-readable data for free use. Wikidata is becoming a comprehensive reference database covering the topics included in Wikipedia. It would be useful for public administration and researchers to use it as a reference. Wikidata will also be used as a platform, for example, in the British ContentMine project, which mines data from scientific literature for free use. This autumn Wikimedia Suomi is organising Wikidata training in Helsinki; those interested are asked to express their interest here.

Historical maps are an excellent example of how public sector organisations can work together with non-profit organisations. Wikimaps is a Wikimedia Suomi project that aims to collect old maps into Wikimedia Commons, georeference them with volunteer effort, and put them to use in various ways. At the Avoin Suomi fair, the use of old maps was presented not only by Wikimedia but also by, for example, Helsinki Region Infoshare and the National Land Survey of Finland, both of which have plenty of historical maps and other geospatial data.

Wikimedia's stand at the event. Photo: Kimmo Virtanen. CC-BY.

A recurring hope at the fair was that the digitalisation of information and the opening of government data would lead to new businesses and thereby to economic growth. The event did showcase interesting new open data services, such as Nearhood, which gathers neighbourhood news and local information in one place, and the Ministry of the Environment's Envibase project, which brings environmental data into open use.

A certain problem for open knowledge projects has been that the societal impact of open data is often hard to prove. This is especially common with cultural materials, because easily measurable economic benefits may not exist. One of the event's keynote speakers, Beth Noveck from the United States, stressed that instead of faith-based arguments, the open data field should start finding evidence of the societal and economic impact of open knowledge. Noveck presented projects from the UK and the US that are in many ways further along than Finland. Perhaps these examples could yield ideas applicable in Finland as well.

Personal data was also a talking point at the fair. The MyData panel discussed the individual's possibilities and constraints in making use of their own personal data. Open Knowledge Finland has also produced a report on the topic. Personal data is an interesting subject that provokes differing opinions. On the one hand, public opinion strongly favours citizens having the right to control data collected about them. On the other hand, the Wikimedia Foundation, for example, has strongly criticised the EU's “right to be forgotten” rules, because they can lead to censorship that distorts source material.

Wikimedia Suomi thanks Samsung, which helpfully lent IT equipment for use at the fair.

by Sampo Viiri at September 25, 2014 01:00 PM

Building an Open Finland

Open Finland 2014. Image: Kimmo Virtanen. CC-BY.

During 15-16 September, Finnish open knowledge and open data practitioners gathered in Helsinki at the Open Finland 2014 event. Wikimedia Finland participated with a joint exhibition stand together with the Finnish OpenGLAM network. We presented the various Wikimedia projects from different standpoints. The GLAM activities were also showcased with the Open Cultural Data course's recently published online contents. Wikimedia also participated at the Finnish eLearning Centre's exhibition stand.

The main purpose of the Open Finland event was to showcase different open data projects and to encourage civil servants to open up their data. Open knowledge is clearly valued by the Finnish government, demonstrated by the fact that the event was organised by the Prime Minister's Office. PM Alexander Stubb was also present and gave the opening speech at the event.

What can Wikimedia offer to public sector organisations? Wikimedia does open knowledge on a practical level. The Wikimedia projects Wikipedia and the media file repository Commons are already well-known international and multilingual platforms. With these platforms, cultural heritage organisations and government offices can open up and link their own data. Wikimedia is a non-profit and its pages are ad-free. This autumn Wikimedia Finland is organising Wikipedia education together with Finnish cultural heritage institutions.

Wikidata is a new way to open machine-readable structured data for free use. Wikidata is becoming a comprehensive linked database that includes data used by Wikipedia and other Wikimedia projects. For civil servants and researchers it would be useful to use Wikidata as a reference tool. It will be utilised for example in the British ContentMine project that uses machines to mine and liberate facts from scientific literature. This autumn Wikimedia Finland will organise a Wikidata workshop. If you are interested, please sign up here!

Historical maps are an excellent example of how governmental and cultural heritage institutions can partner with non-profit organisations. Wikimaps is an initiative by Wikimedia Finland to gather old maps in Wikimedia Commons, place them in world coordinates with the help of Wikimedia volunteers, and start using them in different ways. The project brings together and further develops tools for the discovery of old maps and of information about places through history. At the Open Finland event Wikimedia was not the only participating organisation dealing with old maps: for example, Helsinki Region Infoshare and the National Land Survey of Finland have a wealth of historical maps and other geospatial open data, and some of them have already been published online free of charge.

Wikimedia Finland exhibition stand. Image: Kimmo Virtanen. CC-BY.

At the event there was a clear desire that digitalisation and opening up government data would lead to a new kind of entrepreneurship and thus to economic growth. Indeed, there were interesting product launches, such as Nearhood, which brings together news and other information related to a specific neighbourhood, and the environmental data project Envibase by the Ministry of the Environment.

Demonstrating the societal value of open data has been somewhat difficult. This is especially common in cultural heritage projects where in many cases there are no tangible financial benefits. Beth Noveck, one of the event’s keynote speakers, emphasised the need to search for evidence about the societal and financial value of open data. So far the arguments supporting open data have been too heavily based on faith, not evidence. Noveck displayed many projects in the UK and in the United States. Perhaps these examples could offer good ideas to circulate in Finland too.

Personal data was one of the key topics during the event. The MyData panelists pondered about the citizens’ possibilities and limitations to use data about themselves. Open Knowledge Finland has also published a report about the topic. Personal data is an interesting topic that raises differing opinions. On the one hand the public opinion is clearly in favour of individuals’ right to control data about themselves. On the other hand for example the Wikimedia Foundation has clearly criticised the recent “right to be forgotten” European Union legislation because it can lead to censorship that distorts online source material.

Wikimedia Finland would like to thank Samsung for lending us IT equipment for exhibition use.

by Sampo Viiri at September 25, 2014 01:00 PM

September 16, 2014

Henri Bergius

Flowhub Kickstarter delivery

It is now a year since our NoFlo Development Environment Kickstarter got funded. Since then our team together with several open source contributors has been busy building the best possible user interface for Flow-Based Programming.

When we set out on this crazy adventure, we still mostly had only NoFlo and JavaScript in mind. But there is nothing inherently language-specific in FBP or our UI, and so when people started making other runtimes compatible with the protocol we embraced the idea of full-stack flow-based programming.

Here is how the runtime registration screen looks with the latest release:

Flowhub Runtime Registration

This hopefully highlights a bit of the possibilities of what can be done with Flowhub right now. I know there are several other runtimes that are not yet listed there. We should have something interesting to announce in that space soon!

Live mode

The Flowhub release made today includes several interesting features apart from giving private repository access to our Kickstarter backers. One I'm especially happy about is what we call live mode.

The live mode, initially built by Lionel Landwerlin, enables Flowhub to discover and connect to pieces of flow-based software running in different environments. With it you can monitor, debug, and modify applications without having to restart them!

We made a short demo video of this in action with Flowhub, Raspberry Pi and an NFC tag.

http://www.youtube.com/watch?v=EdgeSDFd9p0

Getting started

Our backers should receive an email today with instructions on how to activate their Flowhub plans. For those who missed the Kickstarter, there should be another batch of Flowhub pre-orders available soon.

Just like with Travis and GitHub, Flowhub is free for open source development. So, everybody should be able to start using it immediately even without a plan.

If you have any questions about Flow-Based Programming or how to use Flowhub, please check out the various ways to get in touch on the NoFlo support page.

Kickstarter Backer badge

by Henri Bergius (henri.bergius@iki.fi) at September 16, 2014 12:00 AM

August 21, 2014

Niklas Laxström

Midsummer cleanup: YAML and file formats, HHVM, translation memory

Wikimania 2014 is now over and that is a good excuse to write updates about the MediaWiki Translate extension and translatewiki.net.
I’ll start with an update related to our YAML format support, which has always been a bit shaky. Translate supports different libraries (we call them drivers) to parse and generate YAML files. Over time the Translate extension has supported four different drivers:

  • spyc uses spyc, a pure PHP library bundled with the Translate extension,
  • syck uses libsyck, a C library (hard to find any details about it), which we call by shelling out to Perl,
  • syck-pecl uses libsyck via a PHP extension,
  • phpyaml uses the libyaml C library via a PHP extension.

The latest change is that I dropped syck-pecl because it does not seem to compile with PHP 5.5 anymore, and I added phpyaml. We tried to use spyc a bit, but the output it produced for localisation files was not compatible with Ruby projects: after complaints, I had to find an alternative solution.

Joel Sahleen let me know of phpyaml, which I had somehow not found before: thanks to him we now use the same libyaml library that Ruby projects use, so we should be fully compatible. It is also the fastest driver of the four. Anyone generating YAML files with Translate is highly recommended to use the phpyaml driver. I have not checked how phpyaml works with HHVM, but I was told that HHVM ships with a built-in yaml extension.
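
To illustrate the compatibility point in Python terms, here is a minimal sketch that prefers the libyaml-backed loader and dumper when PyYAML has been built against libyaml, the same C library that the phpyaml driver and Ruby projects use. The file names are hypothetical.

import yaml  # PyYAML; backed by the libyaml C library when built with it

# Prefer the C (libyaml) implementations when available.
Loader = getattr(yaml, 'CSafeLoader', yaml.SafeLoader)
Dumper = getattr(yaml, 'CSafeDumper', yaml.SafeDumper)

with open('en.yml') as f:  # hypothetical localisation file
    messages = yaml.load(f, Loader=Loader)

with open('en.out.yml', 'w') as f:
    yaml.dump(messages, f, Dumper=Dumper,
              allow_unicode=True, default_flow_style=False)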

Speaking of HHVM, the long-standing bug which causes HHVM to stop processing requests is still unsolved, but I was able to contribute some information upstream. In further testing we also discovered that emails sent via the MediaWiki JobQueue were not delivered, so there is some issue in command line mode. I have not yet had time to investigate this, so HHVM is currently disabled for both web requests and the command line.

I have a couple of refactoring projects for Translate going on. The first is about simplifying the StringMangler interface. This has no user visible changes, but the end goal is to make the code more testable and reduce coupling. For example the file format handler classes only need to know their own keys, not how those are converted to MediaWiki titles. The other refactoring I have just started is to split the current MessageCollection. Currently it manages a set of messages, handles message data loading and filters the collection. This might also bring performance improvements: we can be more intelligent and only load data we need.


Aiming high: creating a translation memory that works for Wikipedia, even though that is a long way from here (photo Marie-Lan Nguyen, CC BY 3.0)

Finally, at Wikimania I had a chance to talk about the future of our translation memory with Nik Everett and David Chan. In the short term, Nik is working on implementing in ElasticSearch an algorithm to sort all search results by edit distance. This should bring translation memory performance on par with the old Solr implementation. After that is done, we can finally retire Solr at Wikimedia Foundation, which is much wanted especially as there are signs that Solr is having problems.

Together with David, I laid out some plans on how to go beyond simply comparing entire paragraphs by edit distance. One of his suggestions is to try doing edit distance over words instead of characters. When dealing with the 300 or so languages of Wikimedia, what is a word is less obvious than what is a character (even that is quite complicated), but I am planning to do some research in this area keeping the needs of the content translation extension in mind.
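
As a rough sketch of the word-versus-character idea, here is a minimal Python implementation of Levenshtein edit distance over arbitrary token sequences; the naive whitespace split below stands in for real word segmentation, which, as noted above, is the hard part for many of Wikimedia's languages.

def edit_distance(a, b):
    # Levenshtein distance between two sequences of tokens.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # delete x
                           cur[j - 1] + 1,           # insert y
                           prev[j - 1] + (x != y)))  # substitute x with y
        prev = cur
    return prev[-1]

# Character level: a string is already a sequence of characters.
print(edit_distance('colour', 'color'))  # 1

# Word level: tokenize first (a naive split; real segmentation is harder).
print(edit_distance('the quick brown fox'.split(),
                    'the slow brown fox'.split()))  # 1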

by Niklas Laxström at August 21, 2014 04:24 PM

August 13, 2014

Riku Voipio

Booting Linaro ARMv8 OE images with Qemu

A quick update - Linaro ARMv8 OpenEmbedded images work just fine with qemu 2.1 as well:

$ wget http://releases.linaro.org/14.07/openembedded/aarch64/Image
$ wget http://releases.linaro.org/14.07/openembedded/aarch64/vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ gunzip vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt \
-kernel Image -append 'root=/dev/vda2 rw rootwait mem=1024M console=ttyAMA0,38400n8' \
-drive if=none,id=image,file=vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[ 0.000000] Linux version 3.16.0-1-linaro-vexpress64 (buildslave@x86-64-07) (gcc version 4.8.3 20140401 (prerelease) (crosstool-NG linaro-1.13.1-4.8-2014.04 - Linaro GCC 4.8-2014.04) ) #1ubuntu1~ci+140726114341 SMP PREEMPT Sat Jul 26 11:44:27 UTC 20
[ 0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
root@genericarmv8:~#
Quick benchmarking with age-old ByteMark nbench:
Index     Qemu    Foundation   Host
Memory    4.294   0.712        44.534
Integer   6.270   0.686        41.983
Float     1.463   1.065        59.528
Baseline (LINUX): AMD K6/233*
Qemu is up to 8x faster than the Foundation model on Integer, but only about 50% faster on Float. Meanwhile, the host PC executes native instructions 7-40x faster than it emulates ARMv8.

by Riku Voipio (noreply@blogger.com) at August 13, 2014 02:36 PM

August 05, 2014

Riku Voipio

Testing qemu 2.1 arm64 support

Qemu 2.1 was just released a few days ago, and is now available on Debian/unstable. Trying out a (virtual) arm64 machine is now just a few steps away for unstable users:

$ sudo apt-get install qemu-system-arm
$ wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-arm64-disk1.img
$ wget https://cloud-images.ubuntu.com/trusty/current/unpacked/trusty-server-cloudimg-arm64-vmlinuz-generic
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt -kernel trusty-server-cloudimg-arm64-vmlinuz-generic \
-append 'root=/dev/vda1 rw rootwait mem=1024M console=ttyAMA0,38400n8 init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring' \
-drive if=none,id=image,file=trusty-server-cloudimg-arm64-disk1.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[ 0.000000] Linux version 3.13.0-32-generic (buildd@beebe) (gcc version 4.8.2 (Ubuntu/Linaro 4.8.2-19ubuntu1) ) #57-Ubuntu SMP Tue Jul 15 03:52:14 UTC 2014 (Ubuntu 3.13.0-32.57-generic 3.13.11.4)
[ 0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
-snip-
...
ubuntu@ubuntu:~$ cat /proc/cpuinfo
Processor : AArch64 Processor rev 0 (aarch64)
processor : 0
Features : fp asimd evtstrm
CPU implementer : 0x41
CPU architecture: AArch64
CPU variant : 0x1
CPU part : 0xd07
CPU revision : 0

Hardware : linux,dummy-virt
ubuntu@ubuntu:~$
The "init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring" is ubuntu cloud stuff that will set the ubuntu user password to "randomstring" - don't use "randomstring" literally there, if you are connected to internets...

For a more detailed writeup on using qemu-system-aarch64, check the excellent one from Alex Bennee.

by Riku Voipio (noreply@blogger.com) at August 05, 2014 07:45 PM

July 07, 2014

Niklas Laxström

Translatewiki.net summer update

It’s been a busy while since the last update, but how could I not have been working on translatewiki.net? ;) Here is an update on my current activities.
In this episode:

  • we provide translations for over 70 % of users of the new Wikipedia app,
  • I read a book on networking performance and get needy for speed,
  • ElasticSearch tries to eat all of us and our memory,
  • HHVM finds the place not fancy enough,
  • Finns and Swedes start cooperating.

Performance

Naturally, I have been thinking of ways to further improve translatewiki.net performance. I have been running HHVM as a beta feature at translatewiki.net for many months now, but I have kept turning it on and off due to stability issues. It is currently disabled, but my plan is to try the Wikimedia-packaged version of HHVM. Those packages only work on Ubuntu 14.04, so Siebrand and I first have to upgrade the translatewiki.net server from Ubuntu 12.04, as we plan to do later this month (July). (Update: done as of 2014-07-09, 14 UTC.)

Map of some translatewiki.net translators

A global network of translators is not served well enough from a single location

After reading a book about networking performance I finally decided to give a content distribution network (CDN) a try. Not because they can optimize and cache things on the fly [1], nor because they can do spam protection [2], but because a CDN can reduce latency, which is usually the main bottleneck of web browsing. We only have a single server in Germany, but our users are international. I am close to the server, so I have a much better experience than many of our users. I do not have any numbers yet, but I will do some experiments and gather some numbers to see whether a CDN helps us.

[1] MediaWiki is already very aggressive in terms of optimizations for resource delivery.
[2] Restricting account creation already eliminated spam on our wiki.

Wikimedia Mobile Apps

Amir and I have been working closely with the Wikimedia Mobile Apps team to ensure that their apps are well supported. In just a couple of weeks, the new app was translated into dozens of languages and released, with over 7 million new installations by non-English users (74 % of the total).

In more detail, we finally addressed a longstanding issue in the Android app which prevented translation of strings containing links. I gave Yuvi access to synchronize translations, ensuring that translators have as much time as possible to translate and the apps have the latest updates before being released. We also discussed how to notify translators before releases to get more translations in on time, and improvements to their i18n frameworks to bring their flexibility more in line with MediaWiki (including plural support).

To put it bluntly, for some reason the mobile i18n frameworks are ugly and hard to work with. Just as an example, Android did not support many languages at all merely because their language codes were one character too long; support is still partial. I can’t avoid comparing this to the extra effort which has been needed to support old versions of Internet Explorer: we would rather be doing other cool things, but the environment is not going to change anytime soon.

Search

I installed and enabled CirrusSearch on translatewiki.net: for the first time, we have a real search engine for all our pages! I had multiple issues, including running a bit tight on memory while indexing all the content.

Translate’s translation memory support for ElasticSearch has been almost ready for a while now. It may take a couple of months before we’re ready to migrate from Solr (first on translatewiki.net, then on Wikimedia sites). I am looking forward to it: as a system administrator, I do not want to run both Solr and ElasticSearch.

I want to say big thanks to Nik for helping both with the translation memory ElasticSearch backend and my CirrusSearch problems.

Wikimedia Sweden launches a new project

I am expecting to see increased activity and new features at translatewiki.net thanks to a new project by Wikimedia Sweden together with InternetFonden.Se. The project has been announced on the Wikimedia blog, but in short they want to bring in more Swedish translators, new projects for translation and possibly open badges to increase translator engagement. They are already looking for feedback, so please do share your thoughts.

by Niklas Laxström at July 07, 2014 09:44 AM

May 08, 2014

Riku Voipio

Arm builder updates

Debian has recently received a donation of 8 build machines from Marvell. The new machines come with quad-core MV78460 Armada XP CPUs, DDR3 DIMM slots so we can plug in more memory, and speedy SATA ports. They replace the Marvell MV78200 based builders that have served us well, building Debian armel since 2009. We are planning a more detailed announcement, but I'll provide a quick summary:

The speed increase provided by the MV78460 can be seen by comparing build times on selected builds since early April:

Qemu build times.

We can now build Qemu in 2h instead of 16h - 8x faster than before! Certainly a substantial improvement, and impressive kit from Marvell! But not all packages gain this amount of speedup:

webkitgtk build times.

This example, webkitgtk, builds barely 3x faster. The explanation is found in the debian/rules of webkitgtk:

# Parallel builds are unstable, see #714072 and #722520
# ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# MAKEARGUMENTS += -j$(NUMJOBS)
# endif
The old builders are single-core[1], so regardless of parallel building you can easily max out the CPU. The new builders will use only 1 of 4 cores without parallel build support in debian/rules.

In this buildd CPU usage graph, we can see that most of the time only one CPU is used. So for fast package build times, make sure your package supports parallel building.

For developers, abel.debian.org is a porter machine with an Armada XP. It has schroots for both armel and armhf. Set "DEB_BUILD_OPTIONS=parallel=4" and off you go.

Finally I'd like to thank Thomas Petazzoni, Maen Suleiman, Hector Oron, Steve McIntyre, Adam Conrad and Jon Ward for making the upgrade happen.

Meanwhile, we have some unrelated trouble - a bunch of disks have broken within a few days of each other. I take it the warranty just ran out...

[1] Only from Linux's point of view - the MV78200 actually has 2 cores, just not SMP or coherent. You could run an RTOS on one core while running Linux on the other.

by Riku Voipio (noreply@blogger.com) at May 08, 2014 07:14 PM

May 07, 2014

Henri Bergius

Flowhub public beta: a better interface for Flow-Based Programming

Today I'm happy to announce the public beta of the Flowhub interface for Flow-Based Programming. This is the latest step in the adventure that started with some UI sketching early last year and went through our successful Kickstarter — and now, thanks to our 1 205 backers, it is available to the public.

Getting Started

This post will go into more detail on how the new Flowhub interface works in a bit, but for those who want to dive straight in, here are the relevant links:

Make sure to read the Getting Started guides and check out the Flowhub FAQ. There is also a new video available:

Video: http://www.youtube.com/embed/8Dos61_6sss

Both the web version and the Chrome app are built following the offline first philosophy, and keep everything you need stored locally inside your browser. The Chrome app and the upcoming iOS and Android builds will enable us to later introduce capabilities that are not possible inside regular browsers, like talking directly to MicroFlo runtimes over USB or Bluetooth. But other than that they're similar in features and user experience.

New User Interface

If you read the NoFlo Update from last October, you might notice that the new Flowhub user interface looks and feels quite different from it.

Main screen of new Flowhub UI

Graph editing in new Flowhub UI

This new design was implemented to improve touch-screen friendliness, as well as to give Flowhub a more focused, unique look. It also allowed us to follow some interesting UX paths that I'll explain next.

Zooming

One typical problem in visual programming tools is that they can become quite cluttered with information. To solve this, we utilized the concept of Zooming User Interfaces, which allows us to show a clear overview of a program when zoomed out, and reveal all kinds of detail about it when zoomed in.

Zoomed out

Zoomed in

Zooming works with two-finger scroll on typical desktop computers, or with the pinch gesture on touch-enabled devices.

Pie Menu

Another interface concept that we used to make interactions faster and more contextual is Pie Menus.

For example, you can easily navigate to subgraphs and component source code with the menu:

Navigating with the Pie Menu

When you have selected multiple nodes, you can use the menu to group them or move them to a new subgraph:

Group selections with the Pie Menu

The menu can also be used for removing edges or nodes:

Deleting an edge with the Pie Menu

You can activate the pie menu in the graph editor with a right mouse click, or with a long press on touch-enabled devices.

Component Editor

Another major new feature is in-app component editing. If your runtime supports it, you can at any time create or modify custom components for your project, and they'll become immediately available for your graphs.

Creating a new component

Component Editing

The programming languages available for component creation depend on the runtime. With NoFlo these are JavaScript and CoffeeScript. With another runtime they might be C, Java, or Python.

Offline First

While some claim that in this day and age you're never offline, the reality is that there are many situations where Internet connectivity is either not available, unreliable, or simply expensive. Think of a typical conference or a hackathon, for instance.

Because of this — and to give software developers the privacy they deserve — Flowhub has been designed to work "offline first". All your graphs, projects, and custom components are stored locally in your browser's Indexed Database and only transmitted over the network when you wish to push to a GitHub project, or interact with a remote runtime.

We're following a UI concept very similar to the Amazon Kindle's, in that you can download projects locally to your device, or browse the ones you have available in the cloud:

Local and remote projects

At any point you can push your changes to a graph or a component to GitHub:

Pushing to GitHub

Runtime discovery happens through a central service, but once you know the address of your FBP runtime, the communication between it and your browser happens directly. This makes it easy to work with Node.js projects running on your own machine, even when offline.
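
As a rough sketch of what that direct communication looks like, here is a minimal Python client speaking the FBP network protocol over a WebSocket. The local address, the 'noflo' subprotocol and the empty payload are assumptions that depend on how your runtime is configured (some runtimes also require a secret).

import json
from websocket import create_connection  # pip install websocket-client

# Hypothetical address of a local FBP runtime.
ws = create_connection('ws://localhost:3569', subprotocols=['noflo'])

# FBP protocol messages are JSON objects with protocol/command/payload.
ws.send(json.dumps({
    'protocol': 'runtime',
    'command': 'getruntime',
    'payload': {},
}))

# The runtime answers with its type, version and capabilities.
print(ws.recv())
ws.close()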

Cross-platform, Full-stack

When we launched the NoFlo UI Kickstarter, we were initially only thinking about how to support NoFlo in different environments. But in the course of development we ended up defining a network protocol for FBP that enabled us to move past just a single FBP environment and towards supporting all of them. This is what prompted the Flowhub rebranding.

Since then, the number of supported FBP environments has been growing. Here is a list of the ones I'm aware of:

I hope that the developers of other FBP environments like JavaFBP and GoFlow add support for the FBP protocol soon so that they can also utilize the Flowhub interface.

Open Source vs. Paid

As promised in our Kickstarter, the NoFlo Development Environment is an open source project available under the MIT license.

Flowhub is a branded and supported instance of that with some additional network services like the Runtime Registry.

NoFlo UI vs. Flowhub

The Flowhub plans allow us to continue development of this Flow-Based Programming toolset, as well as to provide the various network services needed for making the experience smooth.

Just like with GitHub, Flowhub provides a free environment for anybody working on public and open source projects. Private projects need a paid plan.

Kickstarter & Pre-Ordered Plans

It is likely that many readers of this post already supported our Kickstarter or pre-ordered a Flowhub plan. Since Flowhub is still in beta, we haven't activated your plans yet. So for now, everybody is using Flowhub with the free plan.

We will be rolling out the paid plans and Kickstarter rewards towards the end of the beta testing period.

Feel free to log in and start using Flowhub already, however! The plan will be added to your account when we feel the software is ready for it.

Examples

Here are some examples of things you can build with Flowhub targeting web browsers:

For a more comprehensive cross-platform project, see my Building an Ingress Table with Flowhub post.

There is also an ongoing Google Summer of Code project to port various Meemoo apps to Flowhub. This will hopefully result in a lot more demos.

Next Steps

The main purpose of this public beta is to give our backers and other FBP enthusiasts early access to the Flowhub user interface. Now we will focus on stabilization and bug fixing, aided by the NoFlo UI issue tracker. We're also gathering feedback from beta testers in the form of user surveys and will use it to prioritize both bug fixing and feature work.

Flowhub team testing the UI

Right now the main areas of focus are:

We hope to release the stable version of Flowhub in summer 2014.

by Henri Bergius (henri.bergius@iki.fi) at May 07, 2014 12:00 AM

May 02, 2014

Henri Bergius

Flowhub and the GNOME Developer Experience

I've spent the last three days in the GNOME Developer Experience hackfest working on the NoFlo runtime for GNOME with Lionel Landwerlin.

GNOME Developer Experience hackfest participants

The resulting project gives you the ability to build and debug GNOME applications visually with the Flowhub user interface. You can interact with large parts of the GNOME API using either automatically generated components or hand-built ones. And while your software is running, you can see all the data passing through the connections in the Flowhub UI.

GNOME development in Flowhub

The way this works is the following:

  • You install and run the noflo-gnome runtime
  • The runtime loads all installed NoFlo components and dynamically registers additional ones based on GObject Introspection
  • The runtime pings Flowhub's runtime registry to notify the user that it is available
  • Based on the registry, the runtime becomes available in the UI
  • After this, the UI can start communicating with the runtime. This includes loading and registering components, and creating and running NoFlo graphs
  • The graphs are run inside Gjs

Creating a new NoFlo GNOME project

While there is still quite a bit of work to be done in exposing more of the GNOME API as flow-based components, you can already do a lot with this. In addition to building simple UIs with GTK+, working with Clutter animations was especially fun. With NoFlo, every running graph is "live", and so you can easily modify the various parameters and add new functionality while the software is running, and see those changes take effect immediately.

Here is a quick video of building a simple GTK application that loads a Glade user interface definition, runs it as a new desktop window, and does some signal handling:

Video: http://www.youtube.com/embed/uyuoP3sjI6g

If you're interested in visual, flow-based programming on the Linux desktop, feel free to try out the noflo-gnome project!

There are still bugs to squish, documentation to write, and more APIs to wrap as components. All help in those is more than welcome.

by Henri Bergius (henri.bergius@iki.fi) at May 02, 2014 12:00 AM

April 09, 2014

Ubuntu-blogi

Get Google's services working in Ubuntu!

I often hear people grumbling that Ubuntu doesn't have all the features they are used to in Windows. That is quite true and has to be admitted, but lately things have been moving in a better direction. In this post I'll give a few tips for getting even more out of an Ubuntu environment with the help of the Chrome browser.
Practically all of Google's services are tied to their Chrome and Chromium browsers. I noticed this myself about a year ago when I upgraded my company's so-called “office machine” to Ubuntu. I had just started using the Drive cloud service and was frustrated that Google did not offer a native desktop application for Ubuntu. On Windows, I had been frustrated by the slowness of the application.

After some time spent googling, I came across the sites Omg! Ubuntu! and Omg! Chrome! These sites are run by the same maintainer but offer new articles daily on their respective topics. I dug deeper into the articles – and found a way to make Google's applications more readily available on the desktop.

Since I am a Drive user, I wanted easy access to the applications it offers. This turned out to be simple; here are the instructions:

1) Sign in to Chromium or Chrome
2) On the Apps tab, choose the applications you want by right-clicking their icons and selecting “Create shortcut”
3) Save the shortcut to a folder of your choice
4) Navigate back to the folder you chose, select the icons and drag them onto the Ubuntu desktop launcher

Ta-da! Now you have Google's office applications, the cloud, and whatever else you want to install from the Chrome Web Store!

On top of this, you save your files directly to the cloud, so you can access them from anywhere and don't need to worry about backups.

I believe this will be useful for users who have so far been using Ubuntu One (which will be discontinued in the near future) and for those who are fed up with keeping their files in sync. In my own case, my work involves a lot of moving around and I need unrestricted access to my files, and this approach has made things considerably easier for me.

by presidentti at April 09, 2014 05:45 PM

March 19, 2014

Losca

Qt 5.2.1 in Ubuntu

Ubuntu running Qt 5.2.1
Qt 5.2.1 landed in Ubuntu 14.04 LTS last Friday, hooray! Making it into a drop-in replacement for Qt 5.0.2 was not trivial. Because of the qreal change, it was decided to rebuild everything against the new Qt, so it was an all-at-once approach involving roughly 130 source packages while the parts were moving constantly. The landing last week meant pushing around three thousand binary packages to the archives - counting all six architectures - with a total size of close to 10 gigabytes.

The new Qt brings performance and features to base future work on, and is a solid base for the future of Ubuntu. You may be interested in the release notes for Qt 5.2.0 and 5.2.1. The Ubuntu SDK got updated to Qt Creator 3.0.1 + the new Ubuntu plugin at the same time, although updates for the older Ubuntu releases are a work in progress by the SDK Team.

How We Got Here

Throughout the last few months before the final joint push, I filed tens of tagged bugs. For most of that time I was interested only in build and unit test results, since even tracking those was quite a task. I offered simple fixes here and there myself when I found one.

I created automated Launchpad recipe builds for over 80 packages that rely on Qt 5 in Ubuntu. Meanwhile I also kept on updating the Qt packaging for its 20+ source packages and tried to stay on top of Debian's and upstream's changes.

Parallel to this work, some, like the Unity 8 and UI Toolkit developers, started experimenting with my Qt 5.2 PPA. It turned out the rewritten QML engine in Qt 5.2 - V4 - was not entirely stable when 5.2.0 was released, so they worked together with upstream on fixes. It was only after the 5.2.1 release that it could be said that V4 worked well enough for Unity 8. Known issues like these slowed down the start of full-blown testing.

Then everything built, unit tests passed, most integration tests passed and things seemed mostly to work. We had automated autopilot integration test runs. The apps team tested through the entire app store to find out whether apps needed fixes - most were fine without changes. On top of the autopilot test failures found and other app issues, manual testing found a few more bugs.

Sudoku
Some critical pieces of software, like Sudoku, needed small fixes
Finally, last Thursday it was decided to push Qt in, with the belief that the remaining issues had fixes in branches or were not blockers. It turned out that the real deployment of Qt revealed a couple more problems, some new issues were raised to blocker status, and not all of the believed fixes actually fixed the bugs. So it was not a complete success. Considering the complexity of the landing, it was an adequate accomplishment however.

Specific Issues

Throughout this exercise I bumped into more obstacles than I can remember, but those included:
  • Not all of the packages had seen updates for months, or in some cases since last summer, and since I needed to rebuild everything I found various problems that were not related to Qt 5.2
  • Unrelated changes during 14.04 development broke packages - for example, one wouldn't immediately think a gtkdoc update would break a package using Qt
  • Syncing packaging with Debian is GOOD, and the fixes from Debian were likewise excellent and needed, but some changes there had effects on our widespread Qt 5 usage, like the mkspecs directory move
  • The xvfb used to run unit tests needed its parameters updated in most packages because of OpenGL changes in Qt
  • arm64 and ppc64el were late additions to the landing PPA. Fixing those archs up was quite a last-minute effort and needed to continue after the landing by the porters. On the plus side, with Qt 5.2's V4 working on those archs, unlike Qt 5.0's V8-based Qt Declarative, a majority of Unity 8 dependencies are now already available for 64-bit ARM and PowerPC!
  • While Qt was being prepared, the 100 other packages kept on changing, and I needed to stay on top of all of it, especially during the final landing phase that lasted two weeks. During it, there was no way to fully "lock" packages into the Qt 5.2 transition, so for the 20+ manual uploads I simply needed to keep track of whether something had changed in the distribution and accommodate.
One issue related to the last point was that some of the needed tooling was still in progress at the time. There was no support for automated AP test runs using a PPA, and no support for building images. If the migration to the Ubuntu Touch landing process (CI Train, a middle point on the way to CI Airlines) had been completed for all the packages earlier, handling the locking would have been clearer, and the "trunk passes all integration tests too" rule would have prevented the "trunk seemingly got broken" situations I ended up in, since I was using bzr trunks everywhere.

Qt 5.3?

We are close to having a promoted Ubuntu image for mobile users using Qt 5.2, if no new issues pop up. Ubuntu 14.04 LTS will be released in a month, to the joy of desktop and mobile users alike.

It was discussed during the vUDS that Qt 5.3.x would likely be the Qt version for the next cycle, to be on the more conservative side this time. It's not entirely wrong to say we should have migrated to Qt 5.1 at the beginning of this cycle and only then considered 5.2. With 5.0 in use and having known issues, we almost had to switch to 5.2.

Kubuntu will join the Qt 5 users next cycle, so it will no longer be only Ubuntu deciding the version of Qt. Hopefully there can be a joint agreement, but in the worst case Ubuntu will need a separately packaged Qt version.

by Timo Jyrinki (noreply@blogger.com) at March 19, 2014 07:42 AM

March 18, 2014

Henri Bergius

Building an Ingress Table with Flowhub

The c-base space station — a culture carbonite and a hackerspace — is the focal point of Berlin's thriving tech scene. It is also the place where many of the city's Ingress agents converge after an evening of hectic raiding or farming.

An Ingress event at c-base

In February we came up with an idea for combining our dual passions of open source software and Ingress in a new way. Jon Nordby from the Bitraf hackerspace in Oslo had recently shown off the new full-stack development capabilities of Flowhub, made possible by integrating my NoFlo flow-based programming framework for JavaScript with his MicroFlo, which brings similar abilities to microcontroller programming. So why not use them to build something awesome?

Since Flowhub is nearing a public beta, this would also give us a way to showcase some of the possibilities, as well as stress-test Flow-Based Programming in an Internet-connected hardware project. Hackerspace projects often tend to stretch from months to infinity; our experiences with NoFlo and flying drones had already shown that with FBP we can easily parallelize development, challenging some of the central dogmas of the Mythical Man Month. It was worth a try to see if this would allow us to compress the time needed for such a project from a couple of months to a long weekend.

Introducing the Ingress Table

Before the actual hackathon we had two meetings with the project team. There were many decisions to be made, from the size and shape of the table to the features it should have. Looking at the different tables in the c-base main hall, we settled on a square table of slightly less than 1 m², as that would fit nicely in the area and still seat the magical number of eight Ingress agents or other c-base regulars.

The tabletop would be a map of c-base and the surrounding area, and it would show the status of the portals nearby, as well as alert people sitting at it of attacks and other Ingress events of interest. Essentially, it'd be a physical world equivalent of the Intel Map.

Intel Map of the area

We considered integrating a regular screen to have maximum flexibility in the face of the changing world of Ingress, but eventually decided that most people at c-base already spend much of their waking hours looking at a screen, and so we'd do something more ambient and just use a set of physical lights.

Exploded view
Assembled view

The hardware and software also needed some thought, especially since some of the parts needed might have long shipping times. Eventually we settled on the combination of a BeagleBone Black ARM computer as the brains of the system, and a LaunchPad Tiva as the microcontroller running the hardware. The computer would run NoFlo on Linux, and we'd flash the microcontroller with MicroFlo.

Our BeagleBone Black

By the time they arrive at c-base, many Ingress agents have depleted their phones and battery packs, so we incorporated eight USB power ports into the table design. Simply plug in your own cable and you can charge your device while enjoying the beer and the chat.

Once the plans had been set, a flurry of preparations began. We would need lots of things, ranging from wood and glass parts for the table shell, to various different electronics and computer parts for the insides. And some of these would have to be ordered from China. Would they arrive in time?

Design render of the table

I spent the two weeks before the hackathon doing a project in Florence, and it was quite interesting to coordinate the logistics remotely. Thankfully our Berlin team did a stellar job of tracking missing shipments and collecting the things we needed!

The hackathon

I landed in Berlin in the early evening of Friday, March 14th. After negotiating the rush hour public transport from Tegel airport, I arrived at the space station to find most of our team already there, unpacking and getting the supplies ready for the hackathon.

Buying the materials

At this point we essentially had only the raw materials available. Planks of wood, plates of glass and plastic. And a lot of electronics components. No assembly had yet been done, and no lines of code had been written or graphs drawn for the project.

We quickly organized the hackathon into three tracks: hardware, software, and electronics. The hardware team got themselves busy building the table shell, as that would need to be finished early so that the paint would have time to dry before we'd start assembling the electronics into it. Over the next day they'd often call the other teams over to help in holding or moving things, and also for the very important task of test-sitting the table to figure out the optimal trade-off between table height and legroom.

Legroom measurements

While the hardware guys were working, we started designing the software part of it. Some basic decisions had to be taken on how we'd get the data, and how we would filter and transform the raw portal statuses to commands to the actual lights in the table.

Eventually we settled on a NoFlo graph that would poll the portal data in, and run it through a set of transformations to detect the data points of interest, like portals that have changed owners or are under attack. In parallel we would run some animation loops to create a more organic, shifting feel to the whole map by having the light shining through the streets constantly shift and move.

The main Ingress Table NoFlo graph

(and yes, the graph you see above is the actual running code of the table)

Software team at robolab

Since the electronics wouldn't be working for a while still, we decided to also build an Ingress Table emulator in HTML and NoFlo. This would give us something to test the data and our graphs with while the other teams were still working on their parts. This proved to be very useful, as we were able to watch a big Ingress battle through our simulated blinking lights already on Saturday evening, and see our emulated table go through pretty much all the different states we were interested in.

The software team at work

Once the table shell had been built and the paint was drying, the hardware team started preparing the other things like the map layer, the glass top, and the USB chargers.

Watching the paint dry
Attaching the map sticker

For the electronics we noticed that some parts were still missing from the inventory, so I had to do a quick supply run on Saturday. But once we got those, the team got into calculations and soldering.

Electronics work

Every project has its setbacks, and in this case they came in the form of running pre-release software. It turned out that the LaunchPad port of MicroFlo still had some issues, and so most of Sunday was spent debugging the communications protocol and tuning the components. But the end result is a much improved MicroFlo, and eventually we got the major moment of triumph of seeing the street lights animate for the first time: LED strips controlled by a LaunchPad Tiva, in turn controlled by animation loops running in a NoFlo graph on Node.js.

Food time
Figuring out communications problems

On Monday evening we convened at c-base for the final push. The street lights were ready, but there were still some issues with getting the table connected wirelessly to the space station network. And we still needed to implement the MicroFlo component for the portal lights. The latter resulted in an epic parallel programming and debugging session between Jon in Norway and Uwe in Berlin. But by the end of the evening we were able to test the full system for the first time, and carry the table to its new home.

Testing the lights
The table running in the main hall

It was time to celebrate. For an Ingress table, this meant sitting around the table enjoying cold beers, while hacking a level 8 blue portal and watching the lights change across the board as agents ventured out.

Ingress Table in production

(We're still in the process of collecting media about the project. The table will look a lot more awesome in video, and I hope I'll be able to add some of those to this post soon)

Moving ahead

Having the first running version of the table is of course a big milestone. Now we should monitor it for some time (over beer, of course) and make adjustments as necessary. There are some things that obviously need to be changed, like the brightness of the lights given the location of the table in the main hall. And of course we'll only know about the full system's robustness once it has a bit more mileage.

Since we already have an HTML emulator of the table, it might be fun to release that to the public at some point. That way agents who are not in the c-base main hall could also see what is going on through this simple interface.

An interesting area of development is also to see how the table could integrate better with the rest of the space station. There are various screens, ranging from the awesome Mate Light to smaller screens and gauges everywhere. And all of that is pretty much networked and available. Maybe we could visualize some events of interest in other parts of the station. This kind of "Internet of Things" project is never finished.

So far Niantic Labs — the makers of Ingress — have limited the availability of a portal data API to a few selected parties, and so for now we had to work with a third party to get the information we needed. We hope this table will be another step in convincing Niantic of the creative potential that an official, open Ingress API would unleash.

I'd like to give big thanks especially to everybody who participated in the hackathon — whether on location or remotely from Oslo — as well as to those who were cheering us on. I'm also grateful to Flowhub for sponsoring the project. And of course to c-base for being an awesome place where such things can happen.

The full source code for the Ingress Table can be found at https://github.com/c-base/ingress-table

Flowhub - Make code playful

by Henri Bergius (henri.bergius@iki.fi) at March 18, 2014 12:00 AM

March 03, 2014

Niklas Laxström

Numbers on translatewiki.net sign-up process

Translatewiki.net features a good user experience for non-technical translators. A crucial, even critical, component of that is signing up. An unrelated data collection effort for my PhD studies inspired me to gather some data on the translatewiki.net user registration process. I present the results below.

History

At translatewiki.net, the process of becoming an approved translator has, arguably, been complicated at times.

In the early days of the wiki, permissions were not clearly separated: hundreds of users were just given the full set of permissions to edit the MediaWiki namespace and translate that way.

Later, we required people to jump through various hoops after registering before being approved as translators. They had to create a user page with certain elements and post a request on a separate page, and they would not get notifications when they were approved unless they tweaked their preferences.

At some point, we started using the LiquidThreads extension: now users could get notifications when approved, at least in theory. That brought its own set of issues though: many people thought that the LiquidThreads search box on the requests page was the place to write the title of their request. After entering a title, they ended up on a search results page, which was a dead end. This usability issue was so annoying and common that I completely removed the search field from LiquidThreads.
In early 2010 we implemented a special page wizard (FirstSteps) to guide users through the process. For years, this has allowed new users to get approved, and start translating, within a few clicks and a handful of hours after registering.

In late 2013 we enabled the new main page containing a sign-up form. Using that form, translators can create an account in a sandbox environment. Accounts created this way are normal user accounts, except that they can only make example translations to get a feel for the system. The example translations give site administrators some hints on whether to approve the user as a translator or reject the request.

Data collection

The data we have is not ideal.

  • For example, it is impossible to say what our conversion rate is from users visiting the main page to actual translators.
  • A lot of noise is added by spam bots which create user accounts, even though we have a CAPTCHA.
  • When we go far back in the history, the data gets unreliable or completely missing.
    • We only have dates for accounts created after 2006 or so.
    • The log entry format for user permissions has changed multiple times, so the promotion times are missing or even incorrect for many entries until a few years back.

The data collection was made with two scripts I wrote for this purpose. The first script produces a tab separated file (tsv) containing all accounts which have been created. Each line has the following fields:

  1. username,
  2. time of account creation,
  3. number of edits,
  4. whether the user was approved as translator,
  5. time of approval and
  6. whether they used the regular sign-up process or the sandbox.

Some of the fields may be empty because the script was unable to find the data. User accounts for which we do not have an account creation time are not listed. I chose not to try methods which can be used to approximate the account creation time, because data from that far back is too unreliable to be useful.

The first script takes a couple of minutes to run on translatewiki.net, so I split further processing into a separate script to avoid repeating the slow data fetching. The second script calculates a few additional values, like the average and median time to approval, and aggregates the data per month.
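
As an illustration of what the second script does, here is a minimal Python sketch that computes the average and median approval times from such a tsv file. The file name, the timestamp format and the field encodings are assumptions; the actual scripts are in Gerrit (see Source material below).

import csv
from datetime import datetime
from statistics import mean, median

FMT = '%Y-%m-%d %H:%M:%S'  # assumed timestamp format

delays = []  # hours from account creation to translator approval
with open('accounts.tsv') as f:  # hypothetical output of the first script
    for row in csv.reader(f, delimiter='\t'):
        username, created, edits, approved, approved_at, process = row
        # '1' for "approved as translator" is an assumed encoding of field 4
        if approved == '1' and created and approved_at:
            delta = (datetime.strptime(approved_at, FMT)
                     - datetime.strptime(created, FMT))
            delays.append(delta.total_seconds() / 3600)

print('median approval time: %.1f h' % median(delays))
print('average approval time: %.1f h' % mean(delays))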

The data also includes translators who signed up through the sandbox but got rejected: this information is important for calculating the approval rate. For them, we do not know the exact registration date, so we use the time they were rejected instead. This has a small impact on the monthly numbers if a translator registers in one month and gets rejected in a later month. If the script is run again later, numbers for previous months might be somewhat different. For approval times there is no such issue.

Results


Image 1: Account creations and approved translators at translatewiki.net

Image 1 displays all account creations at translatewiki.net as described above, simply grouped by their month of account creation.

We can see that the approval rate has gone down over time. I assume this is caused by spam bot accounts. We did not exclude them, hence we cannot tell whether the approval rate has gone up or down for human users.

We can also see that the number of approved translators who later turn out to be prolific translators has stayed pretty much constant each month. A prolific translator is an approved translator who has made at least 100 edits. The edits can be from any point in time; the script just looks at the current edit count, so the graph above doesn't say anything about wiki activity at any given time.

There is an inherent bias towards old users for two reasons. First, in the beginning translators were basically invited to a new tool from the existing methods they used, so they were likely to continue translating with the new tool. Second, new users have had less time to reach 100 edits. On the other hand, we can see that even in the past few months a dozen translators have already made over 100 edits.

I have collected some important events below, which I will then compare against the chart.

  • 2009: Translation rallies in August and December.
  • 2010-02: The special page to assist in filing translator requests was enabled.
  • 2010-04: We created a new (now old) main page.
  • 2010-10: Translation rally.
  • 2011: Translation rallies in April, September and December.
  • 2012: Translation rallies in August and December.
  • 2013-12: The sandbox sign-up process was enabled.

There is an increase in account creations and approved translators a few months after the assisting special page was enabled. The likely explanation is the new main page, which had a big green button to access the special page. The September 2011 translation rally seems to have been very successful in recruiting new translators, but the other rallies are also visible in the chart.


Image 2: How long it takes for account creation to be approved.

The second image shows how long it takes from account creation for a site administrator to approve the request. Before the sandbox, users had to submit a request to become translators on their own: the time it took them to do so was out of the control of the site administrators. With the sandbox, that is much less the case, as users get either approved or rejected within a couple of days. Let me give an overview of how the sandbox works.

All users in the sandbox are listed on a special page together with the sandbox translations they have made. The administrators can then approve or reject the users. Administrators usually wait until the user has made a handful of translations. Administrators can also send email reminders asking the users to make more translations. If translators do not provide translations within some time, or the translations are very bad, they will get rejected. Otherwise they will be approved and can immediately start using the full translation interface.

We can see that the median approval time is just a couple of hours! The average time varies wildly though. I am not completely sure why, but I have two guesses.
First, some very old user accounts have been reactivated after being dormant for months or years and have finally requested translator rights. Even one of these can skew the average significantly. On a quick inspection of the data, this seems plausible.
Second, originally we made all translators site administrators. At some point, we introduced the translator user group, and existing translators have gradually been getting this new permission as they returned to the site. The script only counts the time when they were added to the translator group.
Alternatively, the script may have a bug and return wrong times. However, that should not be the case for recent years, because the log format has been stable for a while. In any case, the averages are so big as to be useless before 2012, so I left them out of the graph completely.

The sandbox has been in use for only a few months. For January and February 2014, the approval rate has been slightly over 50%. If a significant portion of the rejected users are not spam bots, there might be reason for concern.

Suggested action points

  1. Store the original account creation date and “sandbox edit count” for rejected users.
  2. Investigate the high rejection rate. We can ask the site administrators why about half of the new users are rejected. Perhaps we could also add a “mark as spam” action to get insight into whether we get a lot of spam. Event logging could also be used to get more insight into the points of the process where users get stuck.

Source material

The scripts are in Gerrit; version ‘2’ of the scripts was used for this blog post. The processed data is in a LibreOffice spreadsheet. Original and updated data is available on request, please email me.

by Niklas Laxström at March 03, 2014 04:46 PM

February 06, 2014

Henri Bergius

Full-Stack Flow-Based Programming

The idea of Full-Stack Development is quite popular at the moment — building things that run both the browser and the server side of web development, usually utilizing similar languages and frameworks.

With Flow-Based Programming and the emerging Flowhub ecosystem, we can take this even further. Thanks to the FBP network protocol we can build and monitor graphs spanning multiple devices and flow-based environments.

Jon Nordby gave a Flow-Based Programming talk in the FOSDEM Internet of Things track last weekend. His demo was a running FBP network comprising three different environments that talk together. You can find the talk online.

Here are some screenshots of the different graphs.

MicroFlo running on an Arduino Microcontroller and monitoring a temperature sensor:

MicroFlo on Arduino

NoFlo running on Node.js and communicating with the Arduino over a serial port:

NoFlo on Node.js

NoFlo running in browser and communicating with the Node.js process over WebSockets:

NoFlo on browser

(click to see the full-size picture)

Taking this further

While this setup already works, as you can see the three graphs are still treated separately. The next obvious step will be to utilize the subgraph features of NoFlo UI and allow different nodes of a graph to represent different runtime environments.

This way you could introspect the data passing through all the wires in a single UI window, and "zoom in" to see each individual part of the system.

The FBP ecosystem is growing all the time, with different runtimes popping up for different languages and use cases. While NoFlo's JavaScript focus makes it part of the Universal Runtime, there are many valid scenarios where other runtimes would be useful, especially on mobile, embedded, and desktop.

Work to be done

Interoperability between them is an area we should focus on. The network protocol needs more scrutiny to ensure all scenarios are covered, and more of the FBP/dataflow systems need to integrate it.

Some steps are already being taken in this direction. After Jon's session in FOSDEM we had a nice meetup discussing better integration between MicroFlo on microcontrollers, NoFlo on browser and server, and Lionel Landwerlin's work on porting NoFlo to the GNOME desktop.

Full-stack FBP discussions at FOSDEM 2014

If you're interested in collaborating, please get in touch!

Photo by Forrest Oliphant.

by Henri Bergius (henri.bergius@iki.fi) at February 06, 2014 12:00 AM

January 08, 2014

Niklas Laxström

First day at work

Officially I started on January 1st, but apart from getting an account, today was the first real day at the university. It still feels great – the “oh my, what did I sign up for” feeling still has time to come. ;)

After the WMF daily standup, I had my usual breakfast and headed to the city center, where our research group of four had a meeting. To my surprise, the eduroam network worked immediately. I had configured it at home earlier based on a guide on the site of some university in Switzerland, if I remember correctly: my own university didn’t provide good instructions for setting it up with Fedora and KDE.

Institute of Behavioural Sciences, University of Helsinki

The building on the left is part of the Institute of Behavioural Sciences. It is right next to the building (not visible) where I started my university studies in 2005. (Photo CC BY-NC-ND by Irmeli Aro.)

On my side, preparations for the IWSDS conference are now the highest priority. I have until Monday to prepare my first ever poster presentation. I found PowerPoint and InDesign templates on the university’s website (ugh, proprietary tools). Then there are a few days to get it printed before I fly out on Thursday. After the trip I will make a website for the project to give it some visibility, and figure out the next steps as well as how to proceed with my studies.

After this topic, I got to hear about another part of the research, the collection of data in Sami languages. I connected them with Wikimedia Suomi, which has expressed interest in working with the Sami people.

After the meeting, we went hunting for the so-called WBS codes which are needed in various places to allocate expenses, for example for poster printing and travel plans. (In case someone knows where the abbreviation WBS comes from, there are at least two people in the world who would be interested to know.) The people I met there were all very friendly and helpful.

On my way home I met an old friend from Päivölä & university (Mui Jouni!) on the metro. There was also a surprise ticket inspection – a 25% inspection rate for my trips this year, based on 4 observations. I guess I need more observations before this is statistically significant ;)

One task left for me when I got home was to do the mandatory travel plan. This needs to be done through the university’s travel management software, which is not directly accessible. After trying without success to access it first through their web-based VPN proxy, second with openvpn via NetworkManager via “some random KDE GUI for that” on my laptop and, third, even with a proprietary VPN application on my Android phone, I gave up for today – it’s likely that the VPN connection itself is not the problem and the issue is somewhere else.

It’s still not known where I will get a room (I’m employed in a different department from the one where I’m doing my PhD). I will likely work from home often though, as I am used to.

by Niklas Laxström at January 08, 2014 09:10 PM

January 05, 2014

Niklas Laxström

MediaWiki i18n explained: {{PLURAL}}

This post explains how MediaWiki handles plural rules to developers who need to work with it. In other words, how a string like “This wiki {{PLURAL:$1|0=does not have any pages|has one page|has $1 pages}}” becomes “This wiki has 425 pages”.

Rules

As mentioned before, we have adopted a data-based approach. Our plural rules come from the Unicode CLDR (Common Locale Data Repository) in XML format and are stored in languages/data/plurals.xml. These rules are supplemented by local overrides in languages/data/plurals-mediawiki.xml for languages not supported by CLDR, or where we have yet to unify our existing local rules with the CLDR rules.

As a short recap, translators handle plurals by writing all possible different forms explicitly. That means that there are different forms for singular, dual, plural, etc., depending on what grammatical numbers the language has. There might be more forms because of other grammatical reasons, for example in Russian the grammatical case of the noun varies depending on the number. The rules from CLDR put all numbers into different boxes, each box corresponding to one form provided by the translator.

Preprocessing

The plural rules are stored in the localisation cache (not to be confused with the message cache and the many other caches in MediaWiki) together with other language-specific localisation data. The localisation cache can be stored in different places depending on configuration. The default is to use the SQL database, but the data can also be in CDB files, as it is at the Wikimedia Foundation and translatewiki.net.

The whole process starts 1) when the user runs php maintenance/rebuildLocalisationCache.php, or 2) during a web request, if the cache is stale and automatic cache rebuilds are allowed (as they are by default).

The code proceeds as follows:

LocalisationCache::readSourceFilesAndRegisterDeps

  • LocalisationCache::getPluralRules fills pluralRules
    • LocalisationCache::loadPluralFiles loads both XML files, merges them and stores the result in the in-process cache
  • LocalisationCache::getCompiledPluralRules fills compiledPluralRules
    • LocalisationCache::loadPluralFiles returns the rules from the in-process cache
    • CLDRPluralRuleEvaluator::compile compiles the standard notation into RPN notation
  • LocalisationCache::getPluralTypes fills pluralRuleTypes

So now, for a given language, we have three lists (see Table 1). The pluralRules are used in the frontend (JavaScript) and the compiledPluralRules are used in the backend (PHP) with a custom evaluator. Tim Starling wrote the custom evaluator for performance reasons. The pluralRuleTypes list stores the map between numerical indexes and CLDR keywords, which are not used in MediaWiki plural syntax. Please note that Russian has four plural forms: the fourth form, called other, is used when none of the other rules match and is not stored anywhere.

Table 1: Stored plural data for Russian

pluralRuleTypes | pluralRules                                                | compiledPluralRules
"one"           | "n mod 10 is 1 and n mod 100 is not 11"                    | "n 10 mod 1 is n 100 mod 11 is-not and"
"few"           | "n mod 10 in 2..4 and n mod 100 not in 12..14"             | "n 10 mod 2 4 .. in n 100 mod 12 14 .. not-in and"
"many"          | "n mod 10 is 0 or n mod 10 in 5..9 or n mod 100 in 11..14" | "n 10 mod 0 is n 10 mod 5 9 .. in or n 100 mod 11 14 .. in or"
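
As an illustration only (a sketch in Python with names of my own, not MediaWiki code), the Table 1 rules can be written out in a few lines; the returned index selects one of the translator-provided forms:

def russian_plural_index(n):
    """Box an integer n according to the Table 1 rules for Russian."""
    if n % 10 == 1 and n % 100 != 11:
        return 0  # "one"
    if 2 <= n % 10 <= 4 and not 12 <= n % 100 <= 14:
        return 1  # "few"
    if n % 10 == 0 or 5 <= n % 10 <= 9 or 11 <= n % 100 <= 14:
        return 2  # "many"
    return 3      # "other": the implicit fourth form, not stored anywhere

for n in (1, 3, 11, 425):
    print(n, ["one", "few", "many", "other"][russian_plural_index(n)])
# prints: 1 one, 3 few, 11 many, 425 many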

The cache also stores the magic word PLURAL, defined in languages/messages/MessagesEn.php and translated to other languages, so on Finnish-language wikis they can use {{MONIKKO:$1|$1 talo|$1 taloa}} if they so want. For compatibility reasons, the English magic words are used in all interface translations.

Invocation on backend

There are roughly three ways to trigger plural parsing:

  1. using the plural syntax in a wiki page,
  2. calling the plural parser on a Message object with output format text,
  3. using the plural syntax in a message with output format parse, which calls the full wikitext parser as in 1.

In all cases we end up in Parser::replaceVariables, which expands all magic words and templates (anything enclosed in double braces, sometimes also called {{ constructs). It loads the possibly translated magic words and checks whether the {{thing}} in the wikitext or message matches a known magic word. If not, the {{thing}} is considered a template call. If the plural magic word matches, the parser calls CoreParserFunctions::plural, which takes the arguments, makes them into an array and calls the correct language object with Language::convertPlural( number, forms ); see Table 2 for the function call trace.

In the Language class we first handle the explicit plural forms explained in a previous post on the explicit zero and one forms. If the explicit plural forms do not match, they are removed and we continue with the remaining forms, calling Language::getPluralRuleIndexNumber( number ), which first loads the compiled plural rules into the in-process cache and then calls CLDRPluralRuleEvaluator::evaluateCompiled, which returns the box the number belongs to. Finally we take the matching form given by the translator, or the last form provided. The return value is then substituted in place of the plural magic word call.
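
As a minimal sketch of the explicit-form handling just described (Python for illustration, with a stand-in rule_index callable instead of Language::getPluralRuleIndexNumber; this is not the actual PHP):

def convert_plural(n, forms, rule_index):
    """Explicit forms like "0=..." match their number exactly and are
    removed before the rule index picks among the remaining forms;
    the last form is the fallback if the translator gave too few."""
    remaining = []
    for form in forms:
        prefix, sep, text = form.partition("=")
        if sep and prefix.isdigit():
            if int(prefix) == n:
                return text  # an explicit form wins outright
        else:
            remaining.append(form)
    return remaining[min(rule_index(n), len(remaining) - 1)]

# English boxes numbers with index 0 for n == 1 and index 1 otherwise:
forms = ["0=does not have any pages", "has one page", "has $1 pages"]
text = convert_plural(425, forms, lambda n: 0 if n == 1 else 1)
print("This wiki " + text.replace("$1", "425"))  # This wiki has 425 pages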

Table 2: Function call list for the plural magic word

Via Message::parse:
  • Message::toString
  • Message::parseText
  • MessageCache::parse
  • Parser::parse
  • Parser::internalParse

Via Message::text:
  • Message::toString
  • Message::transformText
  • MessageCache::transform
  • Parser::transformMsg
  • Parser::preprocess

The two paths converge here:
  • Parser::replaceVariables
  • PPFrame_DOM::expand
  • Parser::braceSubstitution
  • Parser::callParserFunction
  • call_user_func_array
  • CoreParserFunctions::plural
  • Language::convertPlural
  • [Plural rule evaluation]

Invocation on frontend

The resource loader module mediawiki.language.data (implemented in the class ResourceLoaderLanguageDataModule) is responsible for loading the plural rules from the localisation cache and delivering them together with other language data to JavaScript.

The resource loader module mediawiki.jqueryMsg provides yet another limited wikitext parser, which can handle plurals, links and a few other things. The module mediawiki (global mediaWiki, usually aliased to mw) provides the messaging interface with functions like mw.msg() or mw.message().text(). Those will not handle plurals without the aforementioned mediawiki.jqueryMsg module. Translated magic words are not supported on the frontend.

If a plural magic word is found, it calls the frontend convertPlural method. This is provided, in a few hops, by the module mediawiki.language, which depends on mediawiki.language.data and mediawiki.cldr. The latter depends on mediawiki.libs.pluralruleparser, which evaluates the (non-compiled) CLDR plural rules to reach the same result as on the PHP side; it is hosted on GitHub and written by Santhosh Thottingal of the Wikimedia Language Engineering team.

by Niklas Laxström at January 05, 2014 08:46 PM

December 18, 2013

Ubuntu-blogi

Three reasons to join COSS, the Finnish Centre for Open Systems and Solutions

Who looks after the interests of the users of Ubuntu and other open source software? Which organisation is the main promoter and advocate of open source in Finland? The answer is COSS ry.

Three reasons to support COSS

COSS logo

  1. COSS raises awareness of open source, especially in public administration
  2. COSS promotes the growth and employment of the Finnish IT sector by accelerating the success of technology of Finnish origin
  3. COSS increases open source expertise through training, events and networking opportunities

Become a supporting member of COSS ry! →

What is COSS?

The Finnish Centre for Open Systems and Solutions – COSS ry is a non-profit association that works to promote open source, open data, open interfaces and open standards.

COSS has been active since 2003 and is internationally known as one of the oldest and most active openness centres in Europe.

COSS promotes cooperation between communities, companies and public administration, and among other things organises events. COSS's website hosts a national calendar of all events in the field: http://coss.fi/kalenteri/

The association works by informing and educating about the principles, practices and technologies of openness. COSS.fi is the largest Finnish website in the field.

Examples of COSS's activities

  • Supports public administration in all efforts to promote the openness of information systems
  • Promotes open source solutions, services and business
    • Organising and supporting events
    • Communications online and in other media
    • Maintaining an active business network: COSS has about 100 Finnish open source companies as members
  • Promotes cooperation between companies, research institutes and universities
  • Promotes collaboration between companies and developer communities
    • The localisation working group translates software into Finnish
    • Organising Linux event days
    • Supporting the Devaamo Summit event
  • Maintains cooperation between Finnish and international organisations and communities in the field
    • Organising the KDE developers' Akademy 2010 event in Tampere
    • Cooperation with the Linux Foundation, Free Software Foundation Europe and many others
  • Promotes open source, open standards, open interfaces and open data
  • Awards the annual Open World Hero prize

Become a supporting member of COSS ry! →

Please help COSS gain more supporters by sharing this message on social media!

by Otto at December 18, 2013 01:29 PM

November 27, 2013

Losca

Jolla launch party

And now for something completely different: I've got my hands on a Jolla now, and it's beautiful!



A quick dmesg is of course among the first things to do...
[    0.000000] Booting Linux on physical CPU 0
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 3.4.0.20131115.2 (abuild@es-17-21) (gcc version 4.6.4 20130412 (Mer 4.6.4-1) (Linaro GCC 4.6-2013.05) ) #1 SMP PREEMPT Mon Nov 18 03:00:49 UTC 2013
[ 0.000000] CPU: ARMv7 Processor [511f04d4] revision 4 (ARMv7), cr=10c5387d
[ 0.000000] CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
[ 0.000000] Machine: QCT MSM8930 CDP
... click for the complete file ...
And what it has eaten: Qt 5.1!
...
qt5-qtconcurrent-5.1.0+git27-1.9.4.armv7hl
qt5-qtcore-5.1.0+git27-1.9.4.armv7hl
qt5-qtdbus-5.1.0+git27-1.9.4.armv7hl
qt5-qtdeclarative-5.1.0+git24-1.10.2.armv7hl
... click for the complete file ...
It was a very nice launch party, thanks to everyone involved.






Update: a few more photos in my Google+ Jolla launch party gallery

by Timo Jyrinki (noreply@blogger.com) at November 27, 2013 08:10 PM

Workaround for setting Full RGB when Intel driver's Automatic setting does not work

Background

I upgraded from Linux 3.8 to 3.11 along with newer Mesa, X.Org and the Intel driver recently, and found that a small workaround was needed because of upstream changes.

The upstream change was the addition of an "Automatic" mode for the "Broadcast RGB" property, which is now the default. This is a sensible default, since many (most?) TVs default to the more limited 16-235 range, and continuing to default to Full on the driver side would mean wrong colors on the TV. I've set my screen to use the full 0-255 range available, so as not to cut down the number of available shades of colors.

Unfortunately it seems the Automatic setting does not work for my HDMI input, i.e. blacks become grey since the driver still outputs the more limited range. Maybe there is something to improve on the driver side, but I'd guess it's more that my 2008 Sony TV actually has a mode for which the standard suggests limited range. I remember the TV did default to limited range, so maybe the EDID data from the TV does not change when setting the RGB range to Full.

I hope the Automatic setting works to offer full range on newer screens and the modes they have, but that's probably up to the manufacturers and standards.

Below is an illustration of the correct setting on my Haswell CPU. When Broadcast RGB is left at its default Automatic setting, the above image is displayed. When set to Full, the image below with deeper blacks is seen instead. I used manual settings on my camera, so the exposure is the same.


Workaround

For me the workaround has evolved to the following so far. Create a /etc/X11/Xsession.d/95fullrgb file:

#!/bin/sh
# Set Broadcast RGB to Full, unless it is already set.
if [ "$(/usr/bin/xrandr -q --prop | grep 'Broadcast RGB: Full' | wc -l)" = "0" ] ; then
    /usr/bin/xrandr --output HDMI3 --set "Broadcast RGB" "Full"
fi
And since I'm using lightdm, adding the following to /etc/lightdm/lightdm.conf means the flicker only happens once during bootup:

display-setup-script=/etc/X11/Xsession.d/95fullrgb

Important: when using the LightDM setting, set the executable bit (chmod +x) on /etc/X11/Xsession.d/95fullrgb for it to work. Obviously also check your output name; for me it was HDMI3.

If there is no situation where the setting would flip back to "Limited 16-235" on its own, the display manager script should be enough, and having it in /etc/X11/Xsession.d is redundant and slows login down. I think for me login maybe went from 2 seconds to 3 seconds, since executing an xrandr query is not cheap.

Misc

Note that, unrelated to Full range usage, the Limited range currently behaves incorrectly on Haswell until the patch in bug #71769 is accepted. That means the blacks are grey in Limited mode even if the screen is also set to Limited.

I'd prefer a kernel parameter for the Broadcast RGB setting, although my Haswell machine boots so fast that I don't get to see too many seconds of wrong colors...

by Timo Jyrinki (noreply@blogger.com) at November 27, 2013 08:50 AM

September 16, 2013

Ubuntu-blogi

Ubuntu installation workshop on Thursday 19 Sep in Helsinki

Software Freedom Day (in Finnish, Avointen ohjelmien päivä) is an international event, celebrated with 286 events around the world this year. The purpose of the day is to raise awareness of open source software.

Software Freedom Day 2013

What is open software?

Open software means software whose licence guarantees its users four basic freedoms:

  1. to use the program without restriction
  2. to study how the program works (from the source code)
  3. to modify the program and make new versions
  4. to redistribute the program to anyone

Well-known open source software includes the Android operating system, the Firefox and Chromium web browsers, the VLC media player and the LibreOffice office suite. According to studies, despite this widespread use, only a third of Finns know what open source is.

The event aims to raise awareness of where software comes from, how it is developed, and the interests of its makers and developers. The event's organisers believe that open software is both ethically and technically a better choice than closed software.

Open source is also economically significant. For example, well-known web services such as Google, Facebook and Twitter run on open source server software, and these services would not have come about without open software.

Open source is especially topical because it is the only way to protect against backdoors built in for spying. The openness of the source code makes it possible to examine in detail how a program works.

In Finland in particular open source should be well known, as many lead developers of open software are Finnish. Examples include Linus Torvalds, Michael “Monty” Widenius and Timo Sirainen.

In Finland, the gathering is in Helsinki

In Finland the event is celebrated under the name Avointen ohjelmien päivä (Open Software Day), a free event open to the general public on Thursday 19 Sep from 17:30 onwards in Helsinki.

The event starts with an introduction to what open software is and where to find it. It then continues as an installation workshop, where the experts present will help install software on the audience's own computers. On offer are Ubuntu installations as well as other Linux distributions, and also open source (FLOSS) applications for Windows and Mac. The event is the easiest way to get acquainted with and start using open software.

A more detailed programme and a registration form can be found on the COSS.fi website.

The event is organised jointly by the Finnish team of the Free Software Foundation Europe (FSFE) and COSS ry, the Finnish Centre for Open Systems and Solutions.

The Finnish event's pages: http://coss.fi/tapahtumat/avoimien-ohjelmien-paiva-2013-software-freedom-day-2013/

The international event's pages: http://www.softwarefreedomday.org/

Join COSS

If you want to support this and other work promoting open source in Finland, join COSS as a member at coss.fi/liity

by Otto at September 16, 2013 12:37 PM

August 15, 2013

Aapo Rantalainen

The stick game and some mathematics

Who wins the 7531 stick game? Why?
The rules, for two players.
Starting position: the sticks lie in rows; the first row has 7 sticks, the next 5, then 3, and the last 1.

On their turn, a player chooses one row and removes as many sticks from it as they wish – at least one, though, and if they want, all of them (and of course no more than the row contains).

The loser is the player who has to take the last stick from the board.

Do I get the first move, or do you want to start the game? Prove it.

The starting position

Stop here and think for a while.

The answer begins:

Let us adopt a notation in which every game position is described by four digits. Since the order of the rows does not matter, we agree that the digits are always written from largest to smallest. So the position 2100 = 2010 = 2001 = 1200 = 1020 = 0120 = 0210 = 0201 = 0021; we denote all of these positions by 2100.
Now the losing condition of the game is:

Definition I
A player loses if they are faced with the position 1000.
(Side note: if we left the zeros out entirely, the number of stick rows at the start could be something other than four.)

What follows is a mathematical proof. A 'lemma' is an auxiliary proposition; programmers can think of it as a function call (not to be confused with mathematical functions). I always define a lemma before using it, so that no circular reasoning can arise.

I claim that the first player always loses (this only becomes clear at the very end of the proof; a short program that checks the same result by brute force is sketched after it).
Claim: for every (i.e. ∀) move by the first player there exists (i.e. ∃) a reply by the opponent with which the first player loses.

Lemma 1110: the player to move loses if they are faced with the position 1110.
Proof: Whatever move the player makes, the next position is 1100,
from which the opponent returns 1000. They lose by Definition I.

Lemma 2200: the player to move loses.
Proof: (The player can take either one or two sticks from one of the two rows.)
They can end up in position
a) 2100
From which the opponent returns 1000. They lose by Definition I.
b) 2000
From which the opponent returns 1000. They lose by Definition I.

Lemma 2211: the player to move loses.
Proof: They can end up in position
a) 2210
From which the opponent returns 2200. They lose by Lemma 2200.
b) 2111
From which the opponent returns 1110. They lose by Lemma 1110.
c) 2110
From which the opponent returns 1110. They lose by Lemma 1110.

Lemma 3210: the player to move loses.
Proof: They can end up in position
a) 3200
From which the opponent returns 2200. They lose by Lemma 2200.
b) 3110
From which the opponent returns 1110. They lose by Lemma 1110.
c) 2210
From which the opponent returns 2200. They lose by Lemma 2200.
d) 2110
From which the opponent returns 1110. They lose by Lemma 1110.
e) 2100
From which the opponent returns 1000. They lose by Definition I.

Lemma 3300: the player to move loses.
Proof: They can end up in position
a) 3200
From which the opponent returns 2200. They lose by Lemma 2200.
b) 3100
From which the opponent returns 1000. They lose by Definition I.
c) 3000
From which the opponent returns 1000. They lose by Definition I.

Lemma 3311: the player to move loses.
Proof: They can end up in position
a) 3310
From which the opponent returns 3300. They lose by Lemma 3300.
b) 3211
From which the opponent returns 2211. They lose by Lemma 2211.
c) 3111
From which the opponent returns 1110. They lose by Lemma 1110.
d) 3110
From which the opponent returns 1110. They lose by Lemma 1110.

Lemma 4400: the player to move loses.
Proof: They can end up in position
a) 4300
From which the opponent returns 3300. They lose by Lemma 3300.
b) 4200
From which the opponent returns 2200. They lose by Lemma 2200.
c) 4100
From which the opponent returns 1000. They lose by Definition I.
d) 4000
From which the opponent returns 1000. They lose by Definition I.

Lemma 4411: the player to move loses.
Proof: They can end up in position
a) 4410
From which the opponent returns 4400. They lose by Lemma 4400.
b) 4311
From which the opponent returns 3311. They lose by Lemma 3311.
c) 4211
From which the opponent returns 2211. They lose by Lemma 2211.
d) 4111
From which the opponent returns 1110. They lose by Lemma 1110.
e) 4110
From which the opponent returns 1110. They lose by Lemma 1110.

Lemma 5500: the player to move loses.
Proof: They can end up in position
a) 5400
From which the opponent returns 4400. They lose by Lemma 4400.
b) 5300
From which the opponent returns 3300. They lose by Lemma 3300.
c) 5200
From which the opponent returns 2200. They lose by Lemma 2200.
d) 5100
From which the opponent returns 1000. They lose by Definition I.
e) 5000
From which the opponent returns 1000. They lose by Definition I.

Lemma 5410: the player to move loses.
Proof: They can end up in position
a) 5400
From which the opponent returns 4400. They lose by Lemma 4400.
b) 5310
From which the opponent returns 3210. They lose by Lemma 3210.
c) 5210
From which the opponent returns 3210. They lose by Lemma 3210.
d) 5110
From which the opponent returns 1110. They lose by Lemma 1110.
e) 5100
From which the opponent returns 1000. They lose by Definition I.
f) 4410
From which the opponent returns 4400. They lose by Lemma 4400.
g) 4310
From which the opponent returns 3210. They lose by Lemma 3210.
h) 4210
From which the opponent returns 3210. They lose by Lemma 3210.
i) 4110
From which the opponent returns 1110. They lose by Lemma 1110.
j) 4100
From which the opponent returns 1000. They lose by Definition I.

Lemma 5511: the player to move loses.
Proof: They can end up in position
a) 5510
From which the opponent returns 5500. They lose by Lemma 5500.
b) 5411
From which the opponent returns 4411. They lose by Lemma 4411.
c) 5311
From which the opponent returns 3311. They lose by Lemma 3311.
d) 5211
From which the opponent returns 2211. They lose by Lemma 2211.
e) 5111
From which the opponent returns 1110. They lose by Lemma 1110.
f) 5110
From which the opponent returns 1110. They lose by Lemma 1110.

Lemma 6420: the player to move loses.
Proof: They can end up in position
a) 6410
From which the opponent returns 5410. They lose by Lemma 5410.
b) 6400
From which the opponent returns 4400. They lose by Lemma 4400.
c) 6320
From which the opponent returns 3210. They lose by Lemma 3210.
d) 6220
From which the opponent returns 2200. They lose by Lemma 2200.
e) 6210
From which the opponent returns 3210. They lose by Lemma 3210.
f) 6200
From which the opponent returns 2200. They lose by Lemma 2200.
g) 5420
From which the opponent returns 5410. They lose by Lemma 5410.
h) 4420
From which the opponent returns 4400. They lose by Lemma 4400.
i) 4320
From which the opponent returns 3210. They lose by Lemma 3210.
j) 4220
From which the opponent returns 2200. They lose by Lemma 2200.
k) 4210
From which the opponent returns 3210. They lose by Lemma 3210.
l) 4200
From which the opponent returns 2200. They lose by Lemma 2200.

Lemma 6431: the player to move loses.
Proof: They can end up in position
a) 6430
From which the opponent returns 6420. They lose by Lemma 6420.
b) 6421
From which the opponent returns 6420. They lose by Lemma 6420.
c) 6411
From which the opponent returns 4411. They lose by Lemma 4411.
d) 6410
From which the opponent returns 5410. They lose by Lemma 5410.
e) 6331
From which the opponent returns 3311. They lose by Lemma 3311.
f) 6321
From which the opponent returns 3210. They lose by Lemma 3210.
g) 6311
From which the opponent returns 3311. They lose by Lemma 3311.
h) 6310
From which the opponent returns 3210. They lose by Lemma 3210.
i) 5431
From which the opponent returns 5410. They lose by Lemma 5410.
j) 4431
From which the opponent returns 4411. They lose by Lemma 4411.
k) 4331
From which the opponent returns 3311. They lose by Lemma 3311.
l) 4321
From which the opponent returns 3210. They lose by Lemma 3210.
m) 4311
From which the opponent returns 3311. They lose by Lemma 3311.
n) 4310
From which the opponent returns 3210. They lose by Lemma 3210.

Lemma 6521: the player to move loses.
Proof: They can end up in position
a) 6520
From which the opponent returns 6420. They lose by Lemma 6420.
b) 6511
From which the opponent returns 5511. They lose by Lemma 5511.
c) 6510
From which the opponent returns 5410. They lose by Lemma 5410.
d) 6421
From which the opponent returns 6420. They lose by Lemma 6420.
e) 6321
From which the opponent returns 3210. They lose by Lemma 3210.
f) 6221
From which the opponent returns 2211. They lose by Lemma 2211.
g) 6211
From which the opponent returns 2211. They lose by Lemma 2211.
h) 6210
From which the opponent returns 3210. They lose by Lemma 3210.
i) 5521
From which the opponent returns 5511. They lose by Lemma 5511.
j) 5421
From which the opponent returns 5410. They lose by Lemma 5410.
k) 5321
From which the opponent returns 3210. They lose by Lemma 3210.
l) 5221
From which the opponent returns 2211. They lose by Lemma 2211.
m) 5211
From which the opponent returns 2211. They lose by Lemma 2211.
n) 5210
From which the opponent returns 3210. They lose by Lemma 3210.

Lemma 6530: the player to move loses.
Proof: They can end up in position
a) 6520
From which the opponent returns 6420. They lose by Lemma 6420.
b) 6510
From which the opponent returns 5410. They lose by Lemma 5410.
c) 6500
From which the opponent returns 5500. They lose by Lemma 5500.
d) 6430
From which the opponent returns 6420. They lose by Lemma 6420.
e) 6330
From which the opponent returns 3300. They lose by Lemma 3300.
f) 6320
From which the opponent returns 3210. They lose by Lemma 3210.
g) 6310
From which the opponent returns 3210. They lose by Lemma 3210.
h) 6300
From which the opponent returns 3300. They lose by Lemma 3300.
i) 5530
From which the opponent returns 5500. They lose by Lemma 5500.
j) 5430
From which the opponent returns 5410. They lose by Lemma 5410.
k) 5330
From which the opponent returns 3300. They lose by Lemma 3300.
l) 5320
From which the opponent returns 3210. They lose by Lemma 3210.
m) 5310
From which the opponent returns 3210. They lose by Lemma 3210.
n) 5300
From which the opponent returns 3300. They lose by Lemma 3300.

CLAIM: From the position 7531, the player to move loses.
Proof: They can end up in position
a) 7530
From which the opponent returns 6530. They lose by Lemma 6530.
b) 7521
From which the opponent returns 6521. They lose by Lemma 6521.
c) 7511
From which the opponent returns 5511. They lose by Lemma 5511.
d) 7510
From which the opponent returns 5410. They lose by Lemma 5410.
e) 7431
From which the opponent returns 6431. They lose by Lemma 6431.
f) 7331
From which the opponent returns 3311. They lose by Lemma 3311.
g) 7321
From which the opponent returns 3210. They lose by Lemma 3210.
h) 7311
From which the opponent returns 3311. They lose by Lemma 3311.
i) 7310
From which the opponent returns 3210. They lose by Lemma 3210.
j) 6531
From which the opponent returns 6431. They lose by Lemma 6431.
k) 5531
From which the opponent returns 5511. They lose by Lemma 5511.
l) 5431
From which the opponent returns 5410. They lose by Lemma 5410.
m) 5331
From which the opponent returns 3311. They lose by Lemma 3311.
n) 5321
From which the opponent returns 3210. They lose by Lemma 3210.
o) 5311
From which the opponent returns 3311. They lose by Lemma 3311.
p) 5310
From which the opponent returns 3210. They lose by Lemma 3210.
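
The result can also be checked by brute force with a short program. Here is a sketch in Python (my own code, not part of the proof) that searches the whole game tree with memoization, using the same largest-digit-first notation for positions as above:

from functools import lru_cache

@lru_cache(maxsize=None)
def loses(state):
    """True if the player to move loses, when taking the last stick loses."""
    if sum(state) == 0:
        return False  # the previous player took the last stick and lost
    for row_index, row in enumerate(state):
        for take in range(1, row + 1):
            nxt = list(state)
            nxt[row_index] -= take
            # keep the digits sorted largest first, as in the notation above
            if loses(tuple(sorted(nxt, reverse=True))):
                return False  # there is a reply that leaves the opponent losing
    return True  # every move hands the opponent a winning position

print(loses((7, 5, 3, 1)))  # True: the first player loses, as proved above
print(loses((1, 0, 0, 0)))  # True: Definition I

In the language of combinatorial game theory this is misère Nim, and 7531 is a loss for the player to move because the rows cancel out bitwise: 7 xor 5 xor 3 xor 1 = 0.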


by Aapo Rantalainen at August 15, 2013 08:30 PM

July 10, 2013

Losca

Latest Compiz gaming update to the Ubuntu 12.04 LTS

A new Compiz window manager performance update reached Ubuntu 12.04 LTS users last week. This completes the earlier [1] [2] enabling of 'unredirected' (compositing disabled) fullscreen gaming and other applications for performance benefits.

The update has two fixes. The first one fixes a Compiz CPU usage regression. The second one enables unredirection also for Intel and Nouveau users on the Mesa 9.0.x stack. That means up-to-date installs from the 12.04.2 LTS installation media, and anyone with an original 12.04 LTS installation who has opted in to the 'quantal' package updates of the kernel, X.Org and Mesa. *)

The new default setting for the unredirection blacklist is shown in the image below (CompizConfig Settings Manager -> General -> OpenGL). It now only blacklists the original Mesa 8.0.x series for nouveau and intel, plus plain '9.0' (not a point release).


I did new runs of OpenArena at openbenchmarking.org from a 12.04.2 LTS live USB. For comparison, I first did a run with the non-updated Mesa 9.0 from February. I then allowed Ubuntu to upgrade Mesa to the current 9.0.3, and ran the test with both the previous version of Compiz and the newly released one.

12.04.2 LTS    Mesa 9.0   | Mesa 9.0.3 | Mesa 9.0.3
               old Compiz | old Compiz | new Compiz
OpenArena fps    29.63    |   31.90    | 35.03     

Reading into the results, Mesa 9.0.3 seems to have improved the slowdown in the redirected case, which would include normal desktop usage as well. Meanwhile, the unredirected performance remains about 10% higher.

*) The packages are linux-generic-lts-quantal, xserver-xorg-lts-quantal, libgl1-mesa-dri-lts-quantal and libegl1-mesa-drivers-lts-quantal. The 'raring' stack with Mesa 9.1 and kernel 3.8 will be available around the time of the 12.04.3 LTS installation media in late August.

by Timo Jyrinki (noreply@blogger.com) at July 10, 2013 12:01 PM

May 21, 2013

Losca

Network from laptop to Android device over USB

If you're running an Android device with a GNU userland Linux in a chroot and need full network access over a USB cable (so that you can use your laptop/desktop machine's network connection from the device), here's a quick primer on how it can be set up.

When doing Openmoko hacking, one always first plugged in the USB cable and forwarded the network, or, as I did later, forwarded the network over Bluetooth. This was mostly because WiFi was quite unstable with many of the kernels.

I recently found myself using a chroot on a Nexus 4 without working WiFi, so instead of my usual WiFi usage I needed network over USB... trivial, of course, except that there's Android in the way and I'm an Android newbie. Thanks to ZDmitry on Freenode, I got the bits for the Android part, so I got it working.

On the device, have e.g. data/usb.sh with the following contents:
#!/system/xbin/sh
CHROOT="/data/chroot"

# Configure usb0, route everything via the host and use Google DNS
ip addr add 192.168.137.2/30 dev usb0
ip link set usb0 up
ip route delete default
ip route add default via 192.168.137.1
setprop net.dns1 8.8.8.8
echo 'nameserver 8.8.8.8' >> $CHROOT/run/resolvconf/resolv.conf
On the host, execute the following:
adb shell setprop sys.usb.config rndis,adb
adb shell data/usb.sh
sudo ifconfig usb0 192.168.137.1
sudo iptables -A POSTROUTING -t nat -j MASQUERADE -s 192.168.137.0/24
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo iptables -P FORWARD ACCEPT
This works at least with an Ubuntu saucy chroot. The main difference in some other distro might be whether resolv.conf has moved to /run or not. You should now be all set up to browse / apt-get stuff from the device again.

Update: Clarified that this is about forwarding the desktop/laptop's network connection to the device, so that the network is accessible from the device over USB.
Update 2, 09/2013: It's also possible to get this working on the newer flipped images. Remove the "$CHROOT" from the nameserver echo line and it should be fine. In my small tests the setting somehow got reset after a while, at which point another run of data/usb.sh on the device restored the connection.

by Timo Jyrinki (noreply@blogger.com) at May 21, 2013 12:20 PM

April 23, 2013

Ubuntu-blogi

Experiences with the Dell XPS 13 Ubuntu laptop

Dell XPS 13 Ubuntu Edition

The Dell XPS 13 Ubuntu Edition, made in cooperation between Dell and Canonical, the company behind Ubuntu, has now been released as a refreshed version, and it is also available in Finland. The laptop is similar in style to the MacBook Air 13, but it is a centimetre narrower without the screen or keyboard being any smaller, and many things are done better, such as the operating system :)

Based on quick testing the machine is very good. A fast Intel i7 processor, 8 GB of RAM and a 256 GB SSD make the laptop very fast. There is a fan in the bottom, but it normally does not spin at all, so the laptop is practically silent. Battery life is 5-10 hours depending on use, and even the power supply is so small that it is easy to carry along.

The hardware's Linux drivers are naturally excellent and everything works as you would expect from a preinstalled Linux laptop. Many details, such as the backlit keyboard, the powered USB port that works even when the laptop is shut down, and the battery charge indicator, earn extra points for a device that already feels high quality, made of aluminium and kevlar. The best feature, though, is perhaps the high-definition display with a resolution of 1920×1080 pixels.

The Dell XPS 13 Ubuntu Edition and other preinstalled Ubuntu machines can currently be bought from only one Dell reseller, directly at www.teraset.net/linux.php; by requesting a quote it can also be bought from Pirkanmaan Konttorikone Oy or, for business customers, directly from Dell. The device could not be ordered from e.g. Jimm's PC or Gigantti even by requesting a quote, but hopefully such a good device will become more widely available.

More photos and a detailed review can be found on Seravo's blog (in English).

by Otto at April 23, 2013 09:44 AM

March 30, 2013

Jouni Roivas

QGraphicsWidget

Usually it's easy to get things working with Qt (http://qt-project.org), but recently I encountered an issue when trying to implement a simple component derived from QGraphicsWidget. My initial idea was to use QGraphicsItem, so I made this little class:

class TestItem : public QGraphicsItem
{
public:
    TestItem(QGraphicsItem *parent = 0) : QGraphicsItem(parent) {}
    void paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget = 0);
    virtual QRectF boundingRect() const;

protected:
    virtual void mousePressEvent(QGraphicsSceneMouseEvent *event);
    virtual void mouseReleaseEvent(QGraphicsSceneMouseEvent *event);
};

void TestItem::mousePressEvent(QGraphicsSceneMouseEvent *event)
{
    qDebug() << __PRETTY_FUNCTION__ << "press";
}

void TestItem::mouseReleaseEvent(QGraphicsSceneMouseEvent *event)
{
    qDebug() << __PRETTY_FUNCTION__ << "release";
}

void TestItem::paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget)
{
    Q_UNUSED(option)
    Q_UNUSED(widget)
    painter->fillRect(boundingRect(), QColor(255, 0, 0, 100));
}

QRectF TestItem::boundingRect() const
{
    return QRectF(-100, -40, 100, 40);
}
Everything was working as expected, but in order to use a QGraphicsLayout, I wanted to derive the class from QGraphicsWidget instead. The naive way was to make minimal changes:

class TestWid : public QGraphicsWidget
{
    Q_OBJECT
public:
    TestWid(QGraphicsItem *parent = 0) : QGraphicsWidget(parent) {}
    void paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget = 0);
    virtual QRectF boundingRect() const;

protected:
    virtual void mousePressEvent(QGraphicsSceneMouseEvent *event);
    virtual void mouseReleaseEvent(QGraphicsSceneMouseEvent *event);
};

void TestWid::mousePressEvent(QGraphicsSceneMouseEvent *event)
{
    qDebug() << __PRETTY_FUNCTION__ << "press";
}

void TestWid::mouseReleaseEvent(QGraphicsSceneMouseEvent *event)
{
    qDebug() << __PRETTY_FUNCTION__ << "release";
}

void TestWid::paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget)
{
    Q_UNUSED(option)
    Q_UNUSED(widget)
    painter->fillRect(boundingRect(), QColor(0, 0, 255, 100));
}

QRectF TestWid::boundingRect() const
{
    return QRectF(-100, -40, 100, 40);
}

Pretty straightforward, isn't it? It showed and painted things as expected, but I didn't get any mouse events. Wait, what?

I spent hours just trying things out and googling the problem. I knew I had had this very same issue earlier, but didn't remember how I solved it. Until I figured out a crucial thing: in the case of QGraphicsWidget you must NOT implement boundingRect(). Instead, use setGeometry on the object.

So the needed changes were to remove the boundingRect() method and to call setGeometry in the TestWid constructor:

setGeometry(QRectF(-100, -40, 100, 40));

After these tiny little changes I finally got everything working. The whole thing made me really frustrated. Solving the issue didn't bring a good feeling; I just felt stupid. Sometimes programming is a great waste of time.

by Jouni Roivas (noreply@blogger.com) at March 30, 2013 01:57 PM

August 31, 2012

Jouni Roivas

Adventures in Ubuntu land with Ivy Bridge

Recently I got an Intel Ivy Bridge based laptop. Generally I'm quite satisfied with it. Of course I installed the latest Ubuntu on it. The first problem was EFI boot, and the BIOS had no other options. The best way to work around it was to use an EFI-aware GRUB 2. I wanted to keep the preinstalled Windows 7 there for a couple of things, so I needed dual boot.

After digging around, this German link was the most relevant and helpful: http://thinkpad-forum.de/threads/123262-EFI-Grub2-Multiboot-HowTo.

In the end all I needed to do was to install GRUB 2 to the EFI boot partition (/dev/sda1 in my case) and create the grub.efi binary under it, then just copy /boot/grub/grub.cfg under it as well. In the BIOS I set up a new boot label to boot \EFI\grub\grub.efi.

After using the system for a couple of days I noticed random crashes: the system hung completely. I finally traced the problem to the HD4000 graphics driver: http://partiallysanedeveloper.blogspot.fi/2012/05/ivy-bridge-hd4000-linux-freeze.html

I needed to update the kernel. But to which one? After multiple tries, I took the "latest" and "shiniest" one: http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.4-precise/. With that kernel I got almost all the functionality and stability I needed.

However, one BIG problem remained: headphones. I got sound normally from the speakers, but after plugging in the headphones I got nothing. This problem seems to exist on almost all the kernels I tried. Then I somehow figured out an important thing related to it: when I boot with the headphones plugged in, I get no sound from them; when I boot WITHOUT the headphones plugged in, they work just fine. Of course I had debugged this problem the whole time with the headphones plugged in, and never noticed it could be some weird detection problem. Since I kind of found a solution for this, I didn't bother to google it further. And of course Canonical does not provide support for unsupported kernels. If I remember correctly this worked with the original Ubuntu 12.04 kernel, but the HD4000 problem is on my scale a bigger one than remembering to boot without anything plugged into the 3.5 mm jack...

Of course my hopes are on 12.10, and I don't want to dig deeper; I just wanted to inform you about this one.

by Jouni Roivas (noreply@blogger.com) at August 31, 2012 07:57 PM

July 04, 2012

Ville-Pekka Vainio

SSD TRIM/discard on Fedora 17 with encrypted partitions

I have not blogged for a while; now that I am on summer holiday and have got a new laptop, I finally have something to blog about. I got a Thinkpad T430 and installed a Samsung SSD 830 myself. The 830 is not actually the best choice for a Linux user, because firmware updates can only be downloaded with a Windows tool. The tool does let you make a bootable FreeDOS USB disk with which you can apply the update, so you can use a Windows system to download the update and apply it just fine on a Linux system. The reason I got this SSD is that it is 7 mm in height and fits into the T430 without removing any spacers.

I installed Fedora 17 on the laptop and selected drive encryption in the Anaconda installer. I used ext4 and did not use LVM; I do not think it would be of much use on a laptop. After the installation I discovered that Fedora 17 does not enable SSD TRIM/discard automatically. That is probably a good default, as apparently not all SSDs support it. When you have ext4 partitions encrypted with LUKS, as Anaconda sets them up, you need to change two files and regenerate your initramfs to enable TRIM.

First, edit your /etc/fstab and add discard to each ext4 mount. Here is an example of my root mount:
/dev/mapper/luks-secret-id-here / ext4 defaults,discard 1 1

Second, edit your /etc/crypttab and add allow-discards to each line to allow the dmcrypt layer to pass TRIM requests to the disk. Here is an example:
luks-secret-id-here UUID=uuid-here none allow-discards

You need at least dracut-018-78.git20120622.fc17 for this to work, which you should already have on an up-to-date Fedora 17.

Third, regenerate your initramfs by doing dracut -f. You may want to take a backup of the old initramfs file in /boot but then again, real hackers do not make backups ;) .

Fourth, reboot, then check with cryptsetup status luks-secret-id-here and with mount that your file systems actually use discard now.

Please note that apparently enabling TRIM on encrypted file systems may reveal unencrypted data.

by Ville-Pekka Vainio at July 04, 2012 06:14 PM

April 29, 2012

Miia Ranta

Viglen MPC-L from Xubuntu 10.04 LTS to Debian stable

With Ubuntu not supplying a kernel suitable for the CPU of my Viglen MPC-L (a Geode GX2 by National Semiconductor, a 486-class chip buzzing at a 399 MHz clock rate; the one Duncan documented the installation of Xubuntu on in 2010), it was time to look for alternatives. I wasn't too keen on the idea of using some random repository to get a suitable kernel for a newer version of Ubuntu, so Debian was the next best thing that came to mind.

Friday night, right before heading out to the pub with friends, I sat on the couch, armed with a laptop, a USB keyboard, an RGB cable and a USB memory stick. Trial and error reminded me to

  1. use BitTorrent to download the image, since our flaky Belkin-powered WiFi cuts off the connection every few minutes and thus corrupts direct downloads, and
  2. do the boot script magic of pnpbios=off noapic acpi=off like with our earlier Xubuntu installation.

In contrast to the experience of installing Xubuntu on the Viglen MPC-L, the Debian installation was easy from here on. The installer seemed not only to detect the needed kernel and install the correct one (Linux wizzle 2.6.32-5-486 #1 Mon Mar 26 04:36:28 UTC 2012 i586 GNU/Linux) but, judging from the success of the first reboot after the installation finished and a quick look at /boot/grub/grub.cfg, had also set the right boot options automatically. So the basic setup was a *lot* easier than it was with Xubuntu!

Some things that I've gotten used to being installed automatically with Ubuntu weren't preinstalled with Debian, so I had to install them myself. Tasksel installed the ssh server, but rsync, lshw and ntfs-3g, which I had gotten used to having in Ubuntu, needed to be installed as well; installing them wasn't too much of a chore. As I use my Viglen MPC-L mainly as my irssi shell nowadays, I of course had to install irssi, plus some other stuff needed by it and my other usage patterns… so… after installing apt-file pastebinit zsh fail2ban for my pet peeves, and tmux irssi irssi-scripts libcrypt-blowfish-perl libcrypt-dh-perl libcrypt-openssl-bignum-perl libdbi-perl sqlite3 libdbd-sqlite3-perl, I finally have approximately the system I needed.

All in all, the experience was a lot easier than what I had with Xubuntu in September 2010. It definitely surprised me and I kind of hope that this process wasn’t as easy and automated 18 months ago…

by Miia Ranta (Myrtti) at April 29, 2012 10:00 PM

January 27, 2012

Aapo Rantalainen

Nokia Lumia 800 for Linux-developer

I got my Nokia Lumia 800 (a Windows Phone 7 device) from Nokia, and I consider myself a Linux developer.

I attached the Lumia to my computer and nothing happened. I went to a discussion forum and learned there is no way to access the phone via Linux. End of story (that was not a long story).


by Aapo Rantalainen at January 27, 2012 07:18 PM

January 24, 2012

Sakari Bergen

WhiteSpace faces in Emacs 23

This is a good old case of RTFM, but since I spent a couple of hours figuring it out, I thought I’d blog about it anyway…

The WhiteSpace package in Emacs allows you to visualize whitespace in your code. The overall settings of the package are controlled with the 'whitespace-style' variable. Before Emacs 23 you didn't need to include the 'face' option for the different faces to work; since Emacs 23 you need to have it set.

Now I can keep obsessing about whitespace with an up-to-date version of Emacs, and maybe publicly posting stuff like this will help me remember to RTFM in the future also :)

by sbergen at January 24, 2012 05:35 PM

January 09, 2012

Sakari Bergen

Multiuser screen made easy

The idea for this all started with someone mentioning

it’d be good if there was some magic thing which did some SSH voodoo to get you a shell that the person on the other end could watch

So, I took a quick look around and noticed that Screen can already do multiuser sessions, which do exactly this. However, controlling the session requires writing commands to screen, which is both relatively complex for beginners and relatively slow if the remote user is typing 'rm -Rf *' ;)

So, I created a wizard-like Python script, which sets up a multiuser screen session and a simple one-button GUI (using PyGTK) for allowing and disallowing remote user access to the session. It also optionally creates a script which makes it easier for the remote user to attach to the session.

Download

Known issues:

  • The helper script creation process for the remote user does not check the user input and runs sudo. Even though the script warns the user, it’s still a potential security risk
  • If the script is terminated unexpectedly, the screen session will stay alive, and will need to be closed manually before this script will work again

Resolving the issues?

Fixing the security issue would just be a matter of more work. However, the lingering screens are a whole different problem: I tried to find a way to get the pid of the screen session, but failed to find a way to do it in Python. That would have made the lingering screen sessions less harmful, as all the communication could have been done with <pid>.<session> instead of simply <session>, which it uses now. The subprocess.Popen object contains the pid of the launched process, but the actual screen session is a child of this process and thus has a different pid. If anyone can point me toward a solution to this, it'd be greatly appreciated!
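
For what it's worth, one possible way around the pid problem would be to parse the output of screen -ls, which lists each session as pid.name. Here is a sketch in modern Python (my own, not part of the script above):

import re
import subprocess

def screen_session_pids(name):
    r"""Return the pids of screen sessions with the given name, parsed
    from `screen -ls` lines that look like "\t12345.name\t(Detached)"."""
    out = subprocess.run(["screen", "-ls"], capture_output=True,
                         text=True).stdout
    return [int(match.group(1))
            for match in re.finditer(r"^\s+(\d+)\.(\S+)", out, re.MULTILINE)
            if match.group(2) == name]

print(screen_session_pids("mysession"))  # e.g. [12345]

With the pid known, the session could then be addressed as <pid>.<session> as described above.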

by sbergen at January 09, 2012 07:55 PM

January 03, 2012

Sakari Bergen

New site up!

I finally got the work done, and here's the result! I moved from Drupal to WordPress, as it feels better for my needs. So far I've enjoyed it more than Drupal.

I didn’t keep all of the content from my old site: I recreated most of it and added some new content. I also went through links to my site with Google’s Webmaster Tools, and added redirects to urls which are linked to from other sites (and resurrected one blog post).

It's been a while since I did any PHP, HTML or CSS. I almost got frustrated for a moment, but after reading this article, things progressed much more easily. Thanks to the author, Andrew Tetlaw! I was also inspired by David Robillard's site, which is mostly based on the Barthelme theme. However, I started out with Automattic's Toolbox theme, customizing most of it.

If you find something that looks or feels strange, please comment!

by sbergen at January 03, 2012 08:57 PM

December 28, 2011

Aapo Rantalainen

Christmas charity donation targets

Christmas is a good time to donate money to charity, isn't it? Here are a couple of tips for those who want an easy way to do a good deed with PayPal.

Wikipedia

Who wouldn't know Wikipedia, but does everyone know that behind it is a rather small foundation? For example, Google has about a million servers; Wikipedia has 679. Yahoo has 13,000 employees; Wikipedia has 95.

https://wikimediafoundation.org/wiki/Donate

Free Software Foundation

The 'free' in the foundation's name does not mean free of charge, but 'freedom'. Software freedom means:

- permission to use it in any way
- permission to study how it works and how it is made
- permission to change how it works (i.e. to fix or improve it)
- permission to copy it to others, modified or unmodified

Free Software is a cause that invites you, too, to think: “imagine a world in which all software is free.” Is the software you use free?

https://my.fsf.org/donate


by Aapo Rantalainen at December 28, 2011 12:59 PM

December 05, 2011

Aapo Rantalainen

MeeGo on ExoPC

Even though Ubuntu runs very well on the ExoPC (see my last post), I had promised to return it with MeeGo, so here we go…

Download the latest image (meego-tablet-ia32-pinetrail-1.2.0.90.12.20110809.2.img) from http://repo.meego.com/MeeGo/builds/1.2.0.90/1.2.0.90.12.20110809.2/images/meego-tablet-ia32-pinetrail/

Copy it to a USB stick and boot the ExoPC from the stick. Yes, ok, ok, ok, ok and ok. Boot. Ready.

I wanted some challenge, so I decided to compile and run JamMo. It was as easy as on Ubuntu (upgraded manually). The game uses a fixed-size 800×480 window, so it would be handy to change the ExoPC's resolution. Xrandr left black borders on the left and right, but the touchscreen still uses the whole screen (so elements in the middle of the screen are accessible normally, but elements near the left and right borders are not).

Solution (partial): add a new screen mode and use it.

run

cvt 840 480

And it gives: "840x480_60.00"   31.25  840 864 944 1048  480 483 493 500 -hsync +vsync

Run (add these to the autorun; they are cleared at every boot):

xrandr --newmode  "840x480_60.00"   31.25  840 864 944 1048  480 483 493 500 -hsync +vsync
xrandr --addmode LVDS1 840x480_60.00

And when you want to use that resolution, run

xrandr --output LVDS1 --mode 840x480_60.00

There are still small black borders, but they don't affect usage (the width must be a multiple of 8; you can test whether 848 is better…)

Issues:
  • The task switcher is still in the old middle of the screen
  • Coming back might leave half of the screen black (this corrects itself after the screen dims)
  • The browser might rotate itself to portrait mode (even if it was first started in landscape)


by Aapo Rantalainen at December 05, 2011 01:14 PM

December 01, 2011

Aapo Rantalainen

Ubuntu on ExoPC

I got Intel's ExoPC in my hands and tested Ubuntu on it.


The ExoPC is a laptop with a touchscreen and without a keyboard; some would call it a 'tablet'. It is not ARM but 32/64-bit x86 (Atom). It has 2 GB of memory and a 64 GB SSD 'hard disk'.

When I got it, it had an (old) MeeGo (http://wiki.meego.com/Devices/ExoPC), which broke when I tried to upgrade. Because it is a ~standard PC, there are many Linuxes for it (e.g. Suse: http://www.meegoexperts.com/2011/09/experiences-exopc-kde/).

I installed Ubuntu 10.04.3 (the latest long-term-supported Ubuntu). The touchscreen was not working, so I upgraded three times: -> 10.10 -> 11.04 -> 11.10. I used a USB keyboard (and also an ssh server), but it seems there is no default virtual keyboard. It is a very cool device with lots of potential.

Hardware buttons:
System Settings -> Keyboard -> Shortcuts
(Or in Gnome: System->Preferences->Keyboard Shortcuts)

There is one button on the back of the device, the power button; it is recognized as 'PowerOff'.
There is one button (a proximity sensor?) on the front of the device, the 'orange magic circle'; it is recognized as 'Audio media' (XF86Media or XF86AudioMedia depending on the Ubuntu version).


Some criticism of the software:
Multitouch is not working (at least out of the box). I have no time to investigate this further.

The screen dims when on battery, even when asked not to dim:
https://bugzilla.gnome.org/show_bug.cgi?id=665073

The touchscreen is no longer recognized after some time of use
https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-input-evdev/+bug/897297

Muting/unmuting the microphone via the command line doesn't work (even though toggling works):
https://bugs.launchpad.net/ubuntu/+source/alsa-utils/+bug/894556


Any use for tablets? Maybe a musical game for children? How about JamMo?

(Embedded video: http://www.youtube.com/watch?v=9tVmFOKvfUY)



by Aapo Rantalainen at December 01, 2011 06:47 PM

November 06, 2011

Miia Ranta

Ubuntu 11.10 on an ExoPC/Wetab, or how I found some use for my tablet and learnt to hate on-screen keyboards

I attended an event in the spring that ended with the miraculous incident of being given an ExoPC to use. The operating system it came installed with was a bit painful to use (and I'm not talking about a Microsoft product), so I didn't find much use for the device. I flashed it with a new operating system image quite often, only to note that few if any of the problems were ever fixed in the UI. Since the operating system project is pretty much dead now, with participants moving to new areas and projects of interest, I decided to bite the bullet and flash my device with the newest Ubuntu.

Installation requires a USB memory stick made into installation media with the tools shipped with regular Ubuntu. A keyboard is also nice to have to make the installation process feasible in the first place, or at least much less painful. After the system is installed comes the pain of getting the hardware to play nice. Surprisingly, I've had no problems other than trying to figure out how to make the device and operating system realise that I want to scroll or right-click with my fingers instead of a mouse. Almost all the previous instructions I've come across involve (at best) Ubuntu 11.04 and a 2.6.x kernel – and the rest fail to give detailed instructions on how to make scrolling or right-clicking work with evdev. The whole process is very frustrating, and I still haven't figured everything out.

Anyway. The first thing you notice, especially without fingerscrolling working, is that the new scrollbars are a royal pain in the hiney. The problem isn't as bad in places where it can be bypassed, like in Chromium with the help of an extension called chromeTouch, where fingerscrolling can be set to work, or in Gnome-shell, which actually has a decent-sized scrollbar, or by uninstalling overlay-scrollbar altogether, which isn't pretty, but works.

The second immediate thing that slaps a cold wet towel on the face – after you've unplugged the USB keyboard – is the virtual keyboards. Ubuntu and its default environment Unity use OnBoard as the default on-screen keyboard. OnBoard is a complete keyboard with (almost) all the keys a normal keyboard would have, but it lacks a feature that is needed on a tablet computer: automatically hiding and unhiding itself. In addition to this annoyance, OnBoard had a tendency to swap the keyboard layout to what I assume to be either US or British instead of the Finnish one I had set as the default during installation. One huge problem with OnBoard, at least in my use, is that it ends up underneath the Unity interface, where it's next to useless.

I tried to install other virtual keyboards, like Maliit and Florence, but instructions and packages for Oneiric are lacking, and anyway, I still don't know how to change the virtual keyboard from OnBoard to something else. The virtual keyboard in a normal Gnome 3 session with Gnome-Shell seems to work more like virtual keyboards should, but alas, it doesn't seem to recognize the keyboard layout settings at all, and thus I'm stuck with a non-Finnish keyboard layout.

Among all these problems, however, Ubuntu 11.10 manages to show great potential with both Unity and Gnome 3. The Ubuntu messaging menu is nice once gmnotify has been installed (as I use the Chromium application Offline Gmail as my email client), Empathy set up, the music application of choice filled with music and browser settings synchronized.

I've found that the webcam works perfectly and the video call quality is much better than it has been earlier on my laptop, where I've resorted to using GMail's video call feature, because it Just Works. It's nice to see that PulseAudio delivers and Bluetooth audio works 100% with both Empathy video calls and stereo music/video content.

Having read about the plans for future Ubuntu releases from the blog posts of people attending UDS-P in Orlando this past week, I openly welcome our future tablet overlords. Ubuntu on tablets needs love, and it's nice to know it's coming up. This all bodes well for my plan to take over the world with an Ubuntu tablet, screen, emacs and chromium :-)

by Miia Ranta (Myrtti) at November 06, 2011 12:06 AM

October 29, 2011

Ville-Pekka Vainio

Getting Hauppauge WinTV-Nova-TD-500 working with VDR 1.6.0 and Fedora 16

The Hauppauge WinTV-Nova-TD-500 is a nice dual-tuner DVB-T PCI card (well, actually it's a PCI-USB thing and the system sees it as a USB device). It works out of the box with the upcoming Fedora 16. It needs firmware, but that's available by default in the linux-firmware package.

However, when using the Nova-TD-500 with VDR, a couple of settings need to be tweaked or the signal will eventually disappear for some reason. The logs (typically /var/log/messages in Fedora) will have something like this in them:
vdr: [pidnumber] PES packet shortened to n bytes (expected: m bytes)
Maybe the drivers or the firmware have a bug which is only triggered by VDR. This problem can be fixed by tweaking VDR's EPG scanning settings. I'll post the settings here in case someone is experiencing the same problems. These go into /etc/vdr/setup.conf in Fedora:


EPGBugfixLevel = 0
EPGLinger = 0
EPGScanTimeout = 0

It is my understanding that these settings disable all EPG scanning done in the background, so VDR will only scan the EPGs of the channels on the transmitters it is currently tuned to. In Finland, most of the interesting free-to-air channels are on two transmitters and the Nova-TD-500 has two tuners, so in practice this should not cause many problems with outdated EPG data.

by Ville-Pekka Vainio at October 29, 2011 06:07 PM

August 25, 2011

Miia Ranta

Things I learnt about managing people while being a Wikipedia admin

Just over four years ago I gave up my volunteer, unpaid role as an administrator of the Finnish Wikipedia. Today, while talking with a friend, I realised what has been one of the most valuable lessons of that role, in both my professional life and my hobbies. While I am quite pessimistic in general, I still benefit from these little nuggets of positive insight almost every day when communicating and working with other people.

  • Assume Good Faith. “Unless there is clear evidence to the contrary, assume that people who work on the project are trying to help it, not hurt it.” Most people aren’t your enemies. Most people will not try to hurt you. If stupidity abounds, it is (usually) neither meant as a personal attack towards you nor intentional.
  • When someone does something that doesn’t immediately make sense and contradicts your assumptions about the skills and common sense of the person you are dealing with, discuss it with them! Don’t make assumptions based on partial information, ask for more details so you don’t need to assume the worst! If something is unclear, asking won’t make things worse.

Pessimists are never disappointed, only positively surprised. But while the world seems like a dark and desolate place and humanity seems doomed, I still have to try to believe in the sensibility of people, and that we can make something special out of the projects we work for. Ubuntu, Wikipedia, life… or just your day-to-day job.

by Miia Ranta (Myrtti) at August 25, 2011 11:49 PM

August 21, 2011

Miia Ranta

And then, unexpectedly, life happens

I hope none of you expected me to blog more often. It’s been over a year since I last blogged, and so much has happened since.

I’ve travelled to Cornwall, started a Facebook page that got a huge following in no time, fiddled a bit with CMS Made Simple at work, bought another Nexus One to replace the one that broke and, after getting the broken one fixed, gave the spare to my sister as a Christmas present, taught Duncan how to make gravadlax and crimp Karelian pasties, visited Berlin and bought a game. I’ve attended a few geeky events, like the Local MeeGo Network meetings in Tampere, Finland, the MeeGo Summit also in Tampere, the MeeGo Conference in San Francisco, US and OggCamp’11 in Farnham, UK.

I’ve also taken a few steps in learning to code in QML, poked around with an Arduino and bought a new camera, an Olympus Pen E-PL1.

What else has happened? Well, among other things, my mother was diagnosed with cholangiocarcinoma right after New Year, and she passed away on the 30th of June.

Many things that I had taken for granted have changed or gone away forever. The importance of some things has changed as my life tries to find a new path to run on.

Blogging and some of my open source related activities have suffered, which I am planning to fix now that I feel strong enough to spend my energy on these hobbies again. Sorry for the hiatus, folks.

Coming up, perhaps in the near future:

  • Rants and Raves about Arduino
  • Entries about social networking sites
  • Camera/Photography jabber
  • Mobile phone/Tablet chatter

So, just so you know, I’m alive, and will soon be in an RSS feed reader near you. AGAIN.

by Miia Ranta (Myrtti) at August 21, 2011 12:20 AM

August 06, 2011

Ville-Pekka Vainio

The Linux/FLOSS Booth at Assembly Summer 2011

The Assembly Summer 2011 demo party / computer festival is happening this weekend in Helsinki, Finland. The Linux/FLOSS booth here is organized jointly by the Finnish Linux User Group, Ubuntu Finland, MeeGo Network Finland and, of course, Fedora. I’m here representing Fedora as a Fedora Ambassador and handing out Fedora DVDs. Here are a couple of pictures of the booth.

The booth is mostly Ubuntu-coloured because most of the people here are members of Ubuntu Finland and Ubuntu in general has a large community in Finland. In addition to live CDs/DVDs, the MeeGo people also brought two tablets running MeeGo (I think they are both ExoPCs) and a few Nokia N950s. They are also handing out MeeGo t-shirts.

People seem to like the new multi-desktop, multi-architecture live DVDs that the European Ambassadors have produced. I think they are a great idea and worth the extra cost compared to the traditional live CDs.

by Ville-Pekka Vainio at August 06, 2011 11:11 AM

April 29, 2011

Sakari Bergen

Website remake coming up, comments disabled

The format of my current website has not worked very well for me, and I'm a bit lazy with bloggy stuff. So, I decided to remake this website. I've already made a new design and will probably be switching from Drupal to Wordpress because it's a bit simpler. I hope to get the new site up in a few months at the latest!

Due to a lot of spam recently, I disabled comments.

by sbergen at April 29, 2011 09:53 AM

March 21, 2011

Jouni Roivas

Wayland

Recently Wayland has become a hot topic. Canonical has announced that Ubuntu will move to Wayland, and MeeGo also has great interest in it.

Qt has had (experimental) Wayland client support for some time now.

A very new thing is support for using Qt as a Wayland server. With that, one can easily make one's own Qt-based Wayland compositor. This is huge: until now the only working Wayland compositor has been the one under wayland-demos. Using Qt for this opens many opportunities.

My vision is that Wayland is the future. And the future might be here sooner than you think...

by Jouni Roivas (noreply@blogger.com) at March 21, 2011 09:42 AM

February 24, 2011

Jouni Roivas

January 03, 2011

Ville-Pekka Vainio

Running Linux on a Lenovo Ideapad S12, part 2

Here’s the first post of what now seems to have become a series of posts.

acer-wmi

I wrote to the kernel’s platform-driver-x86 mailing list about acer-wmi being loaded on this netbook. That resulted in Chun-Yi Lee writing a patch which adds the S12 to the acer-wmi blacklist. Here’s the bug report.

ideapad-laptop

I did a bit of googling on the ideapad-laptop module and noticed that Ike Panhc had written a series of patches which enable a few more of the Fn keys on the S12. The git repository for those patches is here. Those patches are also in linux-next already.

So, I cloned Linus’ master git tree, applied the acer-wmi patch and then git pulled Ike’s repo. Then I followed these instructions, except that now Fedora’s sources are in git, so you need to do something like fedpkg co kernel; cd kernel; fedpkg prep and then find the config file that suits you (spelled out as a sketch below). Now I have a kernel which works pretty well on this system, except for the scheduling/sleep issue mentioned in the previous post.
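
The fedpkg part spelled out (file names vary by release and architecture):

fedpkg co kernel
cd kernel
fedpkg prep
# then pick the generated config matching your architecture
# (one of the kernel-*.config files) and use it as the .config
# of the patched vanilla tree before building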

by Ville-Pekka Vainio at January 03, 2011 10:19 AM

December 27, 2010

Ville-Pekka Vainio

Running Linux (Fedora) on a Lenovo Ideapad S12

I got a Lenovo Ideapad S12 netbook (the version with Intel’s CPU and GPU) a few months ago. It requires a couple of quirks to work with Linux; I’ll write about them here in case they’re useful to someone else as well.

Wireless

The netbook has a “Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01)” wifi chip. It works with the “b43” open source driver, which is in the kernel. However, I think it may not actually reach the speeds it should. You could also use the proprietary “wl” kernel module, available in RPM Fusion as “kmod-wl”, but I don’t like to use closed source drivers myself.

The b43 driver needs the proprietary firmware from Broadcom to work with the 4312 chip. Following these instructions should get you the firmware.
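
In short, the extraction goes roughly like this (a sketch from memory; the exact broadcom-wl tarball version to use depends on your kernel, so check the instructions first):

su -c 'yum install b43-fwcutter'
tar xjf broadcom-wl-<version>.tar.bz2
cd broadcom-wl-<version>/driver
# the object file may be called wl_apsta.o in some tarball versions
su -c 'b43-fwcutter -w /lib/firmware wl_apsta_mimo.o'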

Kernel

The kernel needs the “nolapic_timer” parameter to work well with the netbook. Without it, the netbook seems to easily sleep a bit too deeply. Initially people thought that the problem was in the “intel_idle” driver; the whole thing is discussed in this bug report. However, according to my testing, the intel_idle problem was fixed, but the netbook still has problems, they are just a bit more subtle. The netbook boots fine, but when playing music, the system will easily start playing the same sample over and over again if the keyboard or the mouse is not used for a while. Apparently the system enters some sort of sleeping state. I built a vanilla kernel without intel_idle and I’m seeing this problem with it as well.
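
Adding the parameter means appending it to the kernel line in /boot/grub/grub.conf, something like this (the version string and root device below are placeholders, yours will differ):

kernel /vmlinuz-<version> ro root=<your root device> rhgb quiet nolapic_timer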

Then there’s “acer-wmi”. The module gets loaded by the kernel, and in older versions it was probably somewhat necessary, because it handled the wifi/bluetooth hardware killswitch. It causes problems with NetworkManager, though: it disables the wifi chip on boot and you have to enable wifi from the NetworkManager applet by hand. Here’s my bug report, which hasn’t gotten any attention, but then again, I may have filed it under the wrong component. Anyway, the 2.6.37 series of kernels has the “ideapad_laptop” module, which apparently handles the hardware killswitch, so acer-wmi shouldn’t be needed any more and can be blacklisted.
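
Blacklisting it is a one-liner (the file name is my own choice; anything ending in .conf under /etc/modprobe.d/ will do):

su -c 'echo "blacklist acer-wmi" > /etc/modprobe.d/blacklist-acer-wmi.conf'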

by Ville-Pekka Vainio at December 27, 2010 03:18 PM

November 29, 2010

Jouni Roivas

Encrypted rootfs on MeeGo 1.1 netbook

I promised to publish my scripts for encrypting the rootfs on my Lenovo Ideapad running MeeGo 1.1. It's currently just a dirty hack, but I thought it would be nice to share it with you.

My scripts use cryptoloop. Unfortunately the MeeGo 1.1 netbook stock kernel doesn't support dm-crypt, so that was a no-go. Of course I could have compiled the module myself, but I wanted an out-of-the-box solution.

The basic idea is to create a custom initrd and use it. My solution needs a Live USB stick to boot and do the magic. Another USB drive is also needed to keep the current root filesystem safe while the partition is being encrypted. I don't know if it's possible to encrypt "in place", meaning using two loopback devices; this, however, is the safe solution.
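
To give an idea of what happens under the hood, the core cryptoloop steps look roughly like this (a simplified sketch with example device names, not the actual scripts, which also take care of backing up the rootfs and building the initrd):

# attach the root partition as an encrypted loop device, asks for a passphrase
losetup -e aes /dev/loop0 /dev/sda2
# create a filesystem inside the encrypted loop and copy the saved rootfs back
mkfs.ext3 /dev/loop0
mount /dev/loop0 /mnt/newroot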

For the busy ones, just boot the MeeGo 1.1 Live USB and grab these files:
http://kaaos.huutonauru.net/meego/netbook_rootfs_crypt/crypt_hd.sh
http://kaaos.huutonauru.net/meego/netbook_rootfs_crypt/mkcryptrd.sh

Then:
chmod a+x crypt_hd.sh mkcryptrd.sh
su
./crypt_hd.sh

And follow the instructions.

Those who have more time and want to double-check everything can follow the instructions at: http://kaaos.huutonauru.net/meego/netbook_rootfs_crypt/README

This solution has at least one drawback: once the kernel is updated, you have to recreate the initrd. For that purpose I created a tiny script that can be run after a kernel update:
http://kaaos.huutonauru.net/meego/netbook_rootfs_crypt/update_initrd.sh

That script also needs the mkcryptrd.sh script above.

Of course that may break your system at any time, so be warned.

It worked fine on my Lenovo Ideapad S10-3t with MeeGo 1.1 netbook edition. My test case was to first make a fresh installation from the Live/installation USB, boot again and set up the cryptoloop from the Live USB. After that I could easily boot my encrypted MeeGo 1.1. It asks for the password at a very early phase of the boot process; once it's entered correctly, the MeeGo 1.1 system should boot up normally.

This worked for me, but I give no guarantee that it works for you. However, you're welcome to send patches and improvements.

UPDATE 29.11.2010:
Some people have reported problems when their kernel version differs from the one on the Live USB: they're unable to boot back into their system. I'm trying to figure out a solution to this issue.

by Jouni Roivas (noreply@blogger.com) at November 29, 2010 12:48 PM

June 14, 2010

Miia Ranta

California Dreamin’, release 1.2.1 (LCS2010, MeeGo workshop videos)

As promised earlier, I’ve now published four of the sessions from the Linux Collaboration Summit 2010, which was held in San Francisco in April. They’re viewable on blip.tv, and I’ve decided to follow the licensing the Linux Foundation itself used for the videos of the previous day, so the videos are licensed under Creative Commons Attribution. I managed to burn a lot of time editing the videos, but I guess in the end they’re fairly good. The sound quality isn’t magnificent, but most of the time you can tell what is actually said… I haven’t yet uploaded the MeeGo question hour or the panel, because I’m still not quite convinced that the sound quality is good enough. If you want them on blip.tv, please leave a comment.

Without further ado, here are the episodes so far:

Quim Gil - A Working Day in MeeGo project

<3 <3

by Miia Ranta (Myrtti) at June 14, 2010 09:49 PM

April 29, 2010

Matti Saastamoinen

Ubuntu 10.04 LTS and the Tampere release party

Ubuntu 10.04 LTS will be released today, April 29th. It is a so-called LTS (Long Term Support) release, which gets free security and maintenance updates for three years for the desktop version and five years for the server version. Such an LTS version of Ubuntu is released every two years, with more experimental interim releases every six months. The press release published by Ubuntu Suomi describes the most important new features of version 10.04.

Release parties are organized around the world whenever a new Ubuntu version comes out, and LTS releases attract extra attention and effort. The main Finnish release party for Ubuntu 10.04 is organized by COSS, the Finnish Centre for Open Source Solutions, and Ixonos Oyj in Tampere on May 5th from 3 pm to 7 pm. The venue is Demola in the Finlayson area. In addition to Tampere, release parties will certainly be held in Oulu on Friday, April 30th and in Pori on Saturday, May 15th.

The popularity of the Tampere event has surely surprised everyone, including us organizers. Almost 150 people have already registered, so nobody has to show up alone! The event consists of a few presentations about Ubuntu, food and general socializing, and of course demonstrations of Ubuntu and marveling at it. Piles of Ubuntu 10.04 LTS Finnish Remix CDs will be available to take home, and a Nokia N900 will also be raffled off among the participants.

Ixonos has been commendably active in supporting the event, and the original idea for a Tampere release party also came from the company. At the event we will hear in more detail how Ubuntu is used at Ixonos and why it is important to the company. Delightfully many of those who have registered are representatives of various companies.

There is still room at the event, and registration closes on Friday, April 30th at 12 noon. There will hardly be enough seats for everyone, so it is worth arriving early if you want a seat for the presentations.

The programme and registration can be found at http://www.coss.fi/ubuntufest.

by Matti Saastamoinen at April 29, 2010 04:04 PM