With Oracle donating trademark and code to the Apache Foundation, one point frequently made concerns the licensing differences. LibreOffice is under a weak copyleft license, that is, changes to the existing core need to be made public (at the time a product ships). In contrast, the to-be-Apache OpenOffice would be available under a non-copyleft license, meaning nobody is required to contribute anything back.

It is said that non-copyleft, or permissive, licenses are more popular with corporations, because they allow for much more flexibility in what, and when, to contribute back. Overall, it is conjectured, the projects will still see enough open contributions from corporate participants, because private forks are not cost-effective.

Let’s now have a look at how all that applies to OOo. There are a few things to know beforehand. First, the code represents almost 20 years of development, and is, in many places, a sedimentation of bugfixes over bugfixes – which overall results in highly coupled and fragile code. Secondly, OOo has a mature component framework, API and extension mechanism, which makes it easy for third parties to innovate on top of the existing core.

Given that, it is rather disadvantageous to keep changes to existing core code private, because of high internal maintenance costs (and a very non-linear relation between the size of the private change, and the risk of having it broken quite badly by merging new code from the upstream community). Conversely, it is highly advantageous to add more extension points to the core code, and to reduce the internal coupling, since that enables later, independent functionality (which corporations could use to differentiate themselves).

So then, it seems the differences for the ecosystem between weak copyleft vs. permissive, in the case at hand, are negligible – for the former, responsible behaviour is enforced by the license; for the latter, by technical reality. Beyond existing core code, everyone is free to not publish changes either way. Of course, an Apache-licensed project would permit taking it all-proprietary at any given point in time, but such a move is clearly not in the interest of the community, and specifically not in the interest of the Apache Foundation.

Of the remaining differences, the constraints on, e.g., the timing of contributing back are simply too minor to justify the overhead of running two communities in parallel. That’s the main reason I oppose the idea – as a software engineer, I try hard to avoid duplication for no good reason.


This now almost-past year was a true roller coaster ride for me (and many of my fellows). Not a particularly good excuse, but at least an excuse, for not blogging for such a long time.

The year started out with Oracle announcing in January that the Sun acquisition had closed, and a virtual sigh of relief went through the community – as the months before had seen the usual information embargoes, indecisiveness and anxiety that tend to go with corporate mergers.

People had high hopes that the new owner might be more amenable to changing things fundamental to the governance of the project, and thus fix several issues that had been brewing for a long time. Initial talks were encouraging, but it seems there was a cultural mismatch between the new owner and the opensource communities – information was getting out even more sparsely than before, and there was no sharing of feature plans or release dates – something unthinkable for a project that can only thrive when you share code, and information, in the open. And the bad old habit of exclusionism and carefully maintained control lived on. See an earlier post for one of several cases where near-unanimous community requests were opposed or ignored by Sun/Oracle.

Quite naturally, that was immensely frustrating for many long-standing community members, so over the course of this year opposition grew – in several different sub-groups that later joined forces during the annual conference in Budapest, ultimately resulting in the launch of The Document Foundation and the LibreOffice project.

I’m delighted to be part of that new endeavour – though it means tons of work, and I see friends, colleagues and comrades spending days and nights on coding, infrastructure, QA, translation, advocacy and what not – it’s still a fun ride, because it feels right.

The only constant in life is change – that’s a given, not least in software land. And change is what every project undergoes, like the StarOffice code being opensourced in 2000 after ten years of closed-source development, and now, after another ten years, that same code base finally getting a truly open governance, under the auspices of The Document Foundation. Because opening up the source code means going only half the way – as people wiser than me have repeatedly pointed out.

Ducunt volentem fata, nolentem trahunt – with that, I wish all my readers a very happy and successful new year, looking forward to meet many of you in person again in 2011! And thanks a million for the incredible work you folks did – I feel honoured indeed to be a part of this.

Last week was Hackweek here at Novell, and I had a shot at improving svg support at two places inside OOo (I somehow keep returning to that topic):

  • made up my mind earlier that any attempt to convert svg to OOo’s internal vector format is a waste of time
  • made up my mind earlier that any attempt at implementing an own svg renderer on top of OOo’s graphic subsystem is of no practical value and a duplication of existing functionality
  • made up my mind earlier that plugging in librsvg is the way to go:
    • added librsvg 2.26.3 and libcroco 0.6.2 to OOo source tree (mostly for windows builds), made it buildable inside OOo’s build system
    • hacked up a drawing layer primitive to render svg to a bitmap, re-rendering every time the zoom or output device changes
    • made OOo treat svg as a ‘native’ graphics format, i.e. no longer converting it to internal vector representations, but keeping the original svg file inside the odf package (that actually took the longest time, due to several internal bugs I hit)
    • the final patch for the change is here – not yet 100% production ready, but feature complete
    • down the road, would be nice to use cairo’s ps, and especially pdf export, when detecting a suitable export operation
    • below is a screenshot of some awesome openclipart samples (from the always-brilliant Chrisdesign); both rendering fidelity and render speed are light-years ahead of the internal import I once did, which maps to OOo’s internal vector format
      collection of inserted svg cliparts
    • The upstream feature request for the above is this issue, in which, after I had implemented this, an Oracle engineer announced something apparently similar – which, after several deleted cws, and a question about what’s going on remaining basically unanswered, was kind of a nasty surprise. If my interpretation of the (very sparse) information is accurate, this must have been developed in stealth mode – something inherently incompatible with FLOSS, I guess.
  • switched OOo’s internal svg:d parser from an ad-hoc old implementation to a slightly better ad-hoc shared new implementation, which is able to interpret elliptical arc segments (a somewhat longstanding feature request). The patch for this change (it needs to be hoisted to the dev300 code line, which is ~trivial) is here. It seems the corresponding issue got closed a bit prematurely…

Managed to squeeze two days of Libre Graphics Meeting into my schedule this year again, and it again exceeded expectations. The creative mix of artists, designers and hackers is unique and extraordinary, and the talks, covering works of both art & software, are mostly a revelation (to me). The organizers around Femke, Nicolas & Anne managed to grab a lofty, industrial-style old piano factory for the venue – a perfect match for the event, and with excellent infrastructure. Kudos to them!

From the talks I attended, I think I was most impressed by the nodebox folks, one of the rare moments of interdisciplinary innovation (albeit with prior art) that makes you feel very humble. I’d actually hang my walls with stuff like that.

Some random impressions:

(Lukas Tvrdy with a Krita hairy brush stroke example – Jakub Steiner on his 100%-FLOSS-based icon workflow – Peter & Franz of Scribus fame demoing fancy mesh gradients – Eric Schrijver’s great ranting on design & “pixels are vectors, too” – ReJon’s Inkscape talk via Inkscape – the Cloud Computing panel)

This is in response to Martin’s posting about OOo product development, and my candidature for the OOo Community Council in particular:

The only candidate now for the non-code contributing projects for the next round of council elections will be Thorsten Behrens. he’s a well known great supporter of the hacker driven “Product Development”, from my perspective a good representative of the code contributors. But not for the non-code contributing PD projects of OOo as the charter of the CC states. It’s difficult to do a “no” vote against the only candidate for this seat, especially if the candidate does good things for the project and I consider him as a good friend of mine. But we need a general review of the PD part of the project, and therefore I want to see a person representing the classical school of product development and call for a no-vote and call for new candidates.

I wonder, does anyone really think someone capable of working on the strategic marketing plan will have more time doing so when being a member of the CC? 😉

More seriously, and as I wrote in my introduction mail, I firmly believe that the CC’s central function is arbitration – i.e. talking to people, convincing folks, finding compromise. It’s decidedly not the place to vote people into because you need specific jobs A, B, or C done – that’s what the different projects are for; for the example at hand, the marketing project. My selling point is surely not decades of marketing experience, but rather my ties into the wider community, of which I know very many people in person, and would call quite a few of them friends.

I’ve done QA work on CWSes & sponsored a tinderbox, I know a fair bit about the economies & strategies of FLOSS communities – and I do my legwork in advertising OOo, e.g. at CeBIT. As stated in my introduction mail, I’m explicitly running for this seat to represent projects outside of raw code contribution in the council – in fact, I’ve always frowned upon the notion of being purely a “code contributor”, “qa engineer”, or “marketer” – core to my motivation is my love for this project, that is OOo, and everything that’s necessary to further its success. Across all camps.

And finally, I find the act of lobbying for a “no” vote against a CC candidate quite without precedent, even more so since there was not a single question, either publicly or privately, about my intentions or motivations, let alone a discussion. I can only ask everyone involved to check the facts objectively, and to keep up with the tradition of having the CC be a place of collaboration & compromise, instead of exclusionism & camp mentality.

Note: I’d prefer to have the actual discussion on the dev list, rather than via blog. Please follow up there.


Besides the problem of general polygon clipping, which has been thoroughly researched over the past twenty or so years, it is sometimes the case that only rectangular areas have to be clipped against each other. Two very prominent examples are the calculation of redraw areas in a GUI application – widget and application content areas are, for performance and simplicity, often represented as rectangles – and quick, approximative spatial indexing, e.g. for GIS data.

It turns out that constraining clip calculations to axis-aligned bounding boxes (aka AABB) allows for noticeable simplifications in the algorithm – regarding code and time complexity, as well as numerical stability.

The algorithm

For clipping a set of AABBs against each other, the common sweep line algorithm is employed, in this case sweeping a vertical line from the leftmost box over to the rightmost. Each box B_i thus contributes two sweep line events, E_l_i and E_r_i, one for its left and one for its right edge. After sorting those events by ascending x coordinate, the line is swept across all boxes:

Each time a box’s left edge is hit by the sweep line, two horizontal edges H_u_i and H_l_i are inserted into a list of currently-active horizontal segments. This list is kept sorted by ascending y values, and every vertical edge event is checked for intersection with all active horizontal edge segments.

At the core of every polygon clipping algorithm, mutual edge intersections need to be computed. Fortunately, as rectangles are convex polygons and free of self-intersections, a lot of the more complicated preprocessing steps involved in generic polygon clipping can be avoided. One of the most notable aspects is the fact that this algorithm is numerically stable even when performing intersection calculations with finite-precision floating-point math, because no precision-reducing operations need to be performed (this is in contrast to general polygon clipping, which gets notoriously unstable under floating-point math, since the – sometimes repeated – calculation of intersection points introduces round-off errors, most obviously for oblique edges).

For intersection calculations of AABBs, no arithmetic whatsoever is necessary on the coordinate elements: the resulting intersection vertices are just element-wise merged input coordinates.

Basically, four cases of edge intersections can be distinguished:

Note that, since the sweep line is vertical, it is always a right or a left edge of a rectangle that needs to be intersected with either an upper or a lower edge. Degenerate cases, such as rectangles with zero height or width, or two exactly identical rectangles, are handled in a defined and consistent way, without affecting the general algorithm.

For each sweep line event (being either a left or a right rectangle edge), all currently active horizontal edge segments are processed, starting with the one with the smallest y value. Each left edge sweep line event creates a polygon, into which intersecting horizontal edges are merged. Therefore, when processing the horizontal edge segments, the sweep line will carry a current polygon P_c. The sweep line’s current polygon may change, naturally, as it intersects horizontal edges, taking up the associated polygon of the intersecting edge and in turn passing the current polygon to that horizontal edge:

For the sake of clarity, I have omitted the symmetric cases here, which you get when polygon orientation is reversed (denoting reversed inside and outside areas). Those follow rather straightforwardly. A version of the algorithm that handles those cases, and is thus able to perform the usual boolean operations on AABB-defined polygons, is implemented in C++ and used in OOo’s graphic subsystem. A standalone version can be found here.

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

I’m actually even presenting!

Just uploaded ooo-build OOo 3.2 rc2 mac builds here.

Just a brief notice: took caff+tls, pulled changes up to the latest Debian version from svn, and hacked it into a working state – the script is here, a suitable .caffrc is there.

Spent an intense last week in Orvieto, Italy. The first two days hosted the 2nd odf plugfest; glad to see so many enthusiastic people from the odf universe again, or for the first time in person – and of course to witness big corporation representatives like Doug and Rob sitting at the table, striving for better odf interop.


Following that were three days of conference, with many interesting talks (including those from my excellent colleagues Petr, Noel, Cedric, Fridrich, Kendy, Rodo and Kohei).
I found the outlined ideas around an "odfkit" (the similarity in name to webkit is not by accident) quite remarkable – if that ends up in more code reuse across the place (and ideally also reuses existing code in its implementation), this is Good ™.


Also great to meet the ever-energetic Chris Noack again; it was mostly due to him that I attended a few UX-related talks – and had at least the occasional feeling that the approaches there were a bit by-the-book, maybe leaving a few peculiarities and potential synergies, available in software generally and in FLOSS specifically, out of the equation. Reportedly, there are also students working on UX topics, so it would be really awesome to see them join the education project – in an attempt to tear down the wall between coders and interaction designers that, at least as I perceive it, exists in OOo.