
After the disbanding of the Sun/Oracle OpenOffice team, a sizeable fraction of those developers stayed with the meta-project – some for LibreOffice, employed by SUSE, Red Hat and Canonical, some for IBM. Which means the Hamburg metropolitan area remains one of the centers of gravity for Free office suite hacking activities.

Quite fittingly, we’ll be having a local LibreOffice Hamburg Hackfest this spring, generously hosted at the Attraktor e.V. hacker space – the date is not yet fixed, but almost (quickly cast your vote if you want to attend).

Plus, we’ve established a recurring LibreOffice home hacking event, with one of us taking turns having the Hamburg crowd over for a day:

LibreOffice Hackers Meet

Home hacking at my place


Looking forward to growing this circle over time. :-)

With Oracle donating the OpenOffice.org trademark and code to the Apache Foundation, one point frequently made concerns the licensing differences. LibreOffice is under a weak copyleft license, that is, changes to the existing core need to be made public (at the time a product ships). In contrast, the to-be-Apache OpenOffice would be available under a non-copyleft license, meaning nobody is required to contribute anything back.

It is said that non-copyleft, or permissive, licenses are more popular with corporations, because they allow for much more flexibility in what, and when, to contribute back. Overall, it is conjectured, the projects will still see enough open contributions from corporate participants, because private forks are not cost-effective.

Let’s now have a look at how all that applies to OpenOffice.org. There are a few things to know beforehand. First, the code represents almost 20 years of development, and is, in many places, a sedimentation of bugfixes over bugfixes – which overall results in highly coupled and fragile code. Secondly, OOo has a mature component framework, API and extension mechanism that makes it easy for third parties to innovate on top of the existing core.

Given that, it is rather disadvantageous to keep changes to existing core code private, because of high internal maintenance costs (and a very non-linear relation between the size of the private change, and the risk to have it broken quite badly by merging new code from the upstream community). Conversely, it is highly advantageous to add more extension points to the core code, and reduce the internal coupling, since that enables later, independent functionality (that corporations could use to differentiate themselves).

So then, it seems the differences for the ecosystem between weak copyleft vs. permissive, in the case at hand, are negligible – for the former, responsible behaviour is enforced by the license, for the latter, by technical reality. Beyond existing core code, everyone is free to not publish changes either way. Of course, an Apache-licensed OpenOffice.org would permit taking the project all-proprietary at any given point in time, but such a move is clearly not in the interest of the community, and specifically not in the interest of the Apache Foundation.

Of the remaining differences, the constraints on e.g. the timing of contributing back, are simply too minor to justify the overhead of running two communities in parallel. That’s the main reason I oppose the idea – as a software engineer, I try hard to avoid duplication for no good reason.

So this is about a long-term gripe I have, working (mostly) from a Linux desktop for well over 15 years now – and that is, getting email to work “the way it’s supposed to”. Which is a royal pain in the rear.

Let me elaborate. Before SMTP server admins became really anal about spam and started blacklisting dial-up and random IP addresses, you simply used the default sendmail or postfix setup that came with your distro and everything was fine and dandy. Email clients just delegated sending to that MTA subsystem. After that good old time was over, you usually needed a smarthost to authenticate against – if you were lucky, it accepted arbitrary From addresses, so you could even use it for both work and private mail, or share your box’s setup with your roommate.

When those last loopholes got closed, using a system-wide MTA for outgoing mail on a desktop machine quite apparently became a very, very bad idea (conceptually, and in terms of effort involved to make it work). I guess that was when everybody but die-hard Unixers switched over to all-in-one solutions like Thunderbird, Evolution, or KMail – which came with built-in MTA support. Still, there were corner cases – like mailing out patches from git or quilt, or dishing out signed gpg keys after the last keysigning party, that were, um, kinda hard to make work.

Not surprisingly therefore, the command line utilities used for that, initially nicely adhering to the Unix toolkit approach, soon grew MTA features like a hacker grows a beard. With varying levels of quality, and feature-completeness – and usually – the horror! – with the option (or the requirement), to store mail passwords in cleartext configuration files.

I personally plead guilty of tweaking/enhancing TLS support in caff (CA fire-and-forget – a nice script for signing and mailing gpg keys), because I was using Gmail for my private mail, which required that. Similarly, folks added TLS support to git-send-email, and surely tons of other not-primarily-MTA programs.

The bad news? Well, at least the two examples I gave don’t verify the server’s TLS cert at all, exposing you to trivial man-in-the-middle attacks. Or have you never used those programs on a hotel WLAN, or at a conference?

The next nice feature of a proper MTA, namely queueing mail if there’s temporarily no net, or the remote SMTP server is down, is even harder to achieve for those little tools – usually, people just queue meta-tasks then (like “TODO: re-run caff when I finally get out of this plane / send out patches to Jeff, he needs it by Monday”) – what a sucky state of affairs, in this day and age.

So, how to fix that? Well, don’t replicate MTA features all over the place – use a local, per-user MTA, plus a mail queue, and have all those tools, and your MUA, use that. After playing with sendEmail a bit (and even adding TLS cert fingerprint verification), I re-discovered msmtp, which does one thing extremely well – sending email. Plus, it has built-in support for Gnome Keyring and the Mac OS X Keychain, so you neither have to constantly type your passphrase on the terminal, nor store it plaintext. And it does TLS validation, and also (optionally) fingerprint checking. And it comes with a script to manage a local, per-user mail queue.
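To illustrate that setup, here is a minimal ~/.msmtprc along those lines. This is just a sketch: the account name, address and fingerprint are placeholders, and host/port assume Gmail’s standard submission settings.

 # minimal sketch; account name, address and fingerprint are placeholders
 defaults
 tls on
 tls_starttls on
 tls_trust_file /etc/ssl/certs/ca-certificates.crt

 account gmail
 host smtp.gmail.com
 port 587
 auth on
 user mail@for.me
 from mail@for.me
 # no password line: with keyring support built in, msmtp asks
 # Gnome Keyring / Mac OS X Keychain instead of a cleartext file
 # optionally, pin the server cert in addition to CA validation:
 # tls_fingerprint 01:23:45:...

 account default : gmail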

Well, I didn’t so much like that script, for all the wrong reasons – mostly, I probably was looking for excuses to tear it down into pieces and hack up my own solution – resulting in two much shorter (and admittedly almost feature-less) variants thereof, enqueue.sh and runqueue.sh. The former just grabs all command line arguments, plus all of stdin, and stuffs them into a maildir-like queue – employing another little gem called safecat. The latter periodically processes all files in the queue, calling msmtp with the canned args and standard input. And it sleeps when you’re offline.
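To give a rough idea of the approach, here is a condensed sketch of the two scripts – not the real thing; queue location and file layout are made up for illustration:

 #!/bin/sh
 # enqueue.sh (sketch): first line of the queued file stores the msmtp
 # arguments, the rest is the message from stdin; safecat writes it atomically
 QUEUE=$HOME/.msmtp-queue
 mkdir -p "$QUEUE/tmp" "$QUEUE/new"
 { echo "$*"; cat; } | safecat "$QUEUE/tmp" "$QUEUE/new"

 #!/bin/sh
 # runqueue.sh (sketch): replay every queued file through msmtp; files whose
 # sending fails (e.g. while offline) simply stay in the queue for the next pass
 QUEUE=$HOME/.msmtp-queue
 while true; do
     for f in "$QUEUE"/new/*; do
         [ -e "$f" ] || continue
         args=$(head -n 1 "$f")
         # replay the canned arguments, feed the rest of the file as the message
         tail -n +2 "$f" | msmtp $args && rm -f "$f"
     done
     sleep 300
 done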

Have now happily configured all of mutt, caff and git-send-email to just use enqueue.sh as their “sendmail” equivalent:

~/.gitconfig:

 [sendemail]
	envelopesender   = Thorsten 
	smtpserver       = /home/me/bin/enqueue.sh
	aliasesfile      = /home/me/.mutt/aliases
	aliasfiletype    = mutt

~/.caffrc:

 $CONFIG{'mailer-send'} = [ 'mailer', 'enqueue.sh -a gmail -f mail@for.me -- ' ];

~/.muttrc:

 set sendmail="enqueue.sh -a gmail "
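A quick smoke test from the shell then looks like this (the recipient address is of course hypothetical):

 printf 'Subject: test\n\nhello from the queue\n' | enqueue.sh -a gmail -f mail@for.me -- someone@example.org
 # the message now sits in the queue until runqueue.sh pushes it out via msmtp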

There were two rather minor bumps: the first was a missing safecat package for opensuse – I resuscitated an old spec file from cthiel, and added it to my opensuse buildservice repo. Debian of course has a package. The second was that opensuse’s msmtp did not have gnome keyring support enabled, presumably because it drags in some subset of the gnome stack as requirements. Naturally, on my desktop system, that’s an ignorable concern, so that’s now forked and added to my buildservice repo, too.

Thanks to private and corporate sponsors, and most of all because of so many enthusiastic volunteers, LibreOffice had a presence at this year’s FOSDEM and the CeBIT 2011 trade fair.

Green people manning the LibreOffice FOSDEM booth

Although the two events are almost diametrically opposed, in both audience and recommended attire, there was a common trait – the interest and support we received for LibreOffice was immense. At FOSDEM, we had a table right between CAcert and FSFE, in the entrance area of building H (one of the best places, I gather). Kendy and Bubli had brought boatloads of LibreOffice t-shirts and jumpers from Prague, which we collectively lugged to the booth & almost fully sold over the weekend. Lots of people wearing green, suddenly.

hackers at the libreoffice booth

On Sunday, there was a LibreOffice devroom, among many others, with a talk from yours truly about Impress hacking.

This year, I was able to spend two days at CeBIT, joining the booth team of Jacqueline, Karl-Heinz, Roland, Thomas and Ulrich. The booth was nicely located at a corner, with the Linux New Media and Firefox presences vis-à-vis – and was packed with people most of the time (amazingly, Roland was able to recruit new German association members on the spot, and even talked them into helping out at the booth).

Crowded libreoffice booth, from http://blog.radiotux.de/2011/03/02/cebit-2011-tag-2-rote-huete-und-die-freie-schule/ (cc-by-nc-sa)

Met with a few press people, and tons of friends and colleagues from the new and the old project. Much encouraged to hear interest from larger and smaller sites in going & trying LibreOffice (a handful of successful migrations have already happened), and from a few more open-source companies planning to start hacking LibreOffice core code.

Managed to miss the ICE train back to Hamburg in the evening, due to a friendly “train is 15 minutes late” announcement vanishing literally seconds before the train arrived in the station (much earlier than 15 minutes late).

Close-up picture of the LibreOffice CeBIT booth

The second day was maybe even more crowded, and, judging from the talks I had, slightly more end-user-focused. People were generally very receptive to my “everyone can contribute something to LibreOffice” line. One case in point was a computer science prof, asking for a feature while apparently not aware that he probably has more readily available talent with spare time at his disposal (i.e. students) than most of the LibreOffice project members. Besides that, got offers for more tinderboxes, and a few really high-quality bug reports.

At the end of the second day, the donation jar turns out to be quite full, so the question comes up: “how much is in there?”. Ensuing is a little bet – the closest estimate wins a cheese sandwich, specially crafted by Jacqueline. Results not yet in.

Many thanks to all involved in making that a success – both events were wonderful debuts!

Prompted by Kohei’s nice howto for extracting part of a git repository’s history into a new repo, I attempted the same for our tinbuild script – but it seemed not really optimal for my case.

Instead, I mis-used git rebase --interactive, to transplant the relevant commits into a new and unrelated branch.

Created an entirely new repo, added some boilerplate, like a readme:

mkdir buildbot; cd buildbot
git init; echo ">useful info<" > README; git add README; git commit -m "added readme"

Add the (unrelated) libreoffice/bootstrap repo, so we can grab commits from there:

git remote add libo git://anongit.freedesktop.org/libreoffice/bootstrap
git fetch libo; git checkout -b libo libo/master

Find all the commits to our bin/tinbuild script that we want to transplant:

git log --pretty="format:pick %h" --reverse  bin/tinbuild
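For the tinbuild history, that prints one “pick” line per commit, roughly like this (the hashes below are placeholders, not the real ones):

 pick 1a2b3c4
 pick 5d6e7f8
 pick 9a0b1c2
 ...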

Which already yields a suitable format for the interactive rebase, so here we go:

git checkout master
git rebase -i --onto HEAD master libo

The latter command takes all commits from the libo branch that are not in master, and transplants them on top of HEAD – so you have to delete all suggested picks of course, and replace them with the list generated by the git log above.

Resulting repo is here, only pushed the resulting master branch of course.

This now almost-past year was a true roller coaster ride for me (and many of my fellows). Not a particularly good excuse, but at least an excuse, for not blogging for such a long time.

The year started out with Oracle announcing in January that the Sun acquisition had closed, and a virtual sigh of relief went through the OpenOffice.org community – as the months before had seen the usual information embargoes, indecisiveness and anxiety that tend to go with corporate mergers.

People had high hopes that the new owner might be more amenable to changing things fundamental to the governance of the OpenOffice.org project, and thus fix several issues that had been brewing for a long time. Initial talks were encouraging, but it seems there was a cultural mismatch between the new owner and the opensource communities – information was getting out even more sparsely than before, and there was no sharing of feature plans or release dates – something unthinkable for a project that can only thrive when you share code, and information, in the open. And the bad old habit of exclusionism and carefully maintained control lived on. See an earlier post for one of several cases where almost unequivocal community requests were opposed or ignored by Sun/Oracle.

Quite naturally, that was immensely frustrating for many long-standing community members, so over the course of this year, opposition grew – in several different sub-groups that later joined forces during the annual OpenOffice.org conference in Budapest, and ultimately resulted in the launch of The Document Foundation and the LibreOffice project.

I’m delighted to be part of that new endeavour – though it means tons of work, and I see friends, colleagues and comrades spending days and nights on coding, infrastructure, QA, translation, advocacy and what not – it’s still a fun ride, because it feels right.

The only constant in life is change – that’s a given, not least in software land. And change is what every project undergoes, like the StarOffice code being open-sourced in 2000 after ten years of closed-source development, and now, after another ten years, that same code base finally getting a truly open governance, under the auspices of The Document Foundation. Because opening up the source code means going only half the way – as people wiser than me have repeatedly pointed out.

Ducunt volentem fata, nolentem trahunt – with that, I wish all my readers a very happy and successful new year, looking forward to meeting many of you in person again in 2011! And thanks a million for the incredible work you folks did – I feel honoured indeed to be a part of this.

Last week was Hackweek here at Novell, and I had a shot at improving svg support in two places inside OOo (I somehow keep returning to that topic):

  • made up my mind earlier that any attempt to convert svg to OOo’s internal vector format is a waste of time
  • likewise, that any attempt at implementing our own svg renderer on top of OOo’s graphic subsystem is of no practical value and a duplication of existing functionality
  • and that plugging in librsvg is the way to go:
    • added librsvg 2.26.3 and libcroco 0.6.2 to the OOo source tree (mostly for Windows builds), and made them buildable inside OOo’s build system
    • hacked up a drawing layer primitive to render svg to a bitmap, every time zoom or output device changes
    • made OOo treat svg as a ‘native’ graphics format, i.e. no longer converting it to internal vector representations, but keeping the original svg file inside the odf package (that actually took the longest time, due to several internal bugs I hit)
    • the final patch for the change is here – not yet 100% production ready, but feature complete
    • down the road, would be nice to use cairo’s ps, and especially pdf export, when detecting a suitable export operation
    • below is a screenshot of some awesome openclipart samples (from the always-brilliant Chrisdesign) – both rendering fidelity and render speed are light-years ahead of the internal import I once did, which maps to OOo’s internal vector format
      collection of inserted svg cliparts
    • The upstream feature request for the above is this issue, in which, after I had implemented this, an Oracle engineer announced something apparently similar – which, after several deleted cws, and a question about what’s going on remaining basically unanswered, was kind of a nasty surprise. If my interpretation of the (very sparse) information is accurate, this must have been developed in stealth mode – something inherently incompatible with FLOSS, I guess.
  • switched OOo’s internal svg:d parser from an ad-hoc old implementation to a slightly better ad-hoc shared new implementation that is able to interpret elliptical arc segments (a somewhat longstanding feature request). Patch for this change (needs to be hoisted to the dev300 code line, which is ~trivial) is here. It seems the corresponding issue got closed a bit prematurely…

Managed to squeeze two days of Libre Graphics Meeting into my schedule this year again, and it again exceeded expectations. The creative mix of artists, designers and hackers is unique and extraordinary; the talks, covering both works of art & software, are mostly a revelation (to me). The organizers around Femke, Nicolas & Anne managed to grab a lofty, industrial-style old piano factory for the venue – a perfect match for the event, and with excellent infrastructure. Kudos to them!

From the talks I attended, I think I was most impressed by the nodebox folks, one of the rare moments of interdisciplinary innovation (albeit with prior art) that makes you feel very humble. I’d actually hang my walls with stuff like that.

Some random impressions:

(Lukas Tvrdy with Krita hairy brush stroke example – Jakub Steiner on his 100%-FLOSS-based icon workflow – Peter & Franz of Scribus fame demoing fancy mesh gradients – Eric Schrijver’s great ranting on design & “pixels are vectors, too” – ReJon’s Inkscape talk via Inkscape – the CloutComputing panel)

Was in Gelsenkirchen again this year, for what seems to be the rock festival hitting the sweet spot between band relevance (i.e. not entirely arcane or unknown) and size (i.e. you don’t need to walk one hour from your tent to the stages). What should I say – it was a blast. Rain stopped on Thursday, and the sun was shining the whole weekend. My favourites this time: Bloodbath, Accept, Kreator, Nevermore, Orphaned Land, Rage + orchestra. Honorable mention: Mambo Kurt, for doing a really funny gig on a home organ + C64, in front of a very demanding audience.

Impressions:

(dawn in the amphitheater – Kreator’s Miland „Mille“ Petrozza – 3 behind the mixing desk – shower, festival-style – part of the venue – Orphaned Land from Israel, surprisingly good – stunts in mid-air – Rage + strings)

This is in response to Martin’s posting about OOo product development, and my candidature for the OOo Community Council in particular:

The only candidate now for the non-code contributing projects for the next round of council elections will be Thorsten Behrens. he’s a well known great supporter of the hacker driven “Product Development”, from my perspective a good representative of the code contributors. But not for the non-code contributing PD projects of OOo as the charter of the CC states. It’s difficult to do a “no” vote against the only candidate for this seat, especially if the candidate does good things for the project and I consider him as a good friend of mine. But we need a general review of the PD part of the project, and therefore I want to see a person representing the classical school of product development and call for a no-vote and call for new candidates.

I wonder, does anyone really think someone capable of working on the strategic marketing plan will have more time doing so when being a member of the CC? ;-)

More seriously, and as I wrote in my introduction mail, I firmly believe that the CC’s central function is arbitration – i.e. talking to people, convincing folks, finding compromise. It’s decidedly not the place to vote people into just because you need specific jobs A, B, or C done – that’s what the different projects are for; for the example at hand, the marketing project. My selling point is surely not decades of marketing experience, but rather my ties into the wider community, in which I know very many people in person, and would call quite a few of them friends.

I’ve done QA work on CWSes & am sponsoring a tinderbox, and I know a fair bit about the economies & strategies in FLOSS communities – and I do my legwork in advertising OOo, e.g. at CeBIT. As stated in my introduction mail, I’m explicitly running for this seat to represent projects outside of raw code contribution in the council – in fact, I’ve always frowned upon the notion of being purely a “code contributor”, “qa engineer”, or “marketer” – core to my motivation is my love for this project, that is OOo, and everything that’s necessary to further its success. Across all camps.

And finally, I find the act of lobbying for a “no” vote against a CC candidate quite without precedent, even more so since there was not even a single question, neither publicly nor privately, about my intentions or motivations, let alone a discussion. I can only ask everyone involved to check the facts objectively, and keep up with the tradition of having the CC be a place of collaboration & compromise, instead of exclusionism & camp mentality.

Note: I’d prefer to have the actual discussion on the dev list, rather than via blog. Please follow up there.
