PostgreSQL/PostGIS Administrators Course 17.05./18.05.2018 in Zürich

The course is aimed at PostgreSQL users who want to expand their administration skills. Various approaches to optimising the use of their databases are explained and practised with a range of examples.

By the end of the course, participants will be able to create, manage and tune their own database servers, databases and database users. They will also be able to find the information they need on their own and will know the most important platforms and documentation resources.

The following topics are covered:

  • Introduction to PostgreSQL
  • Server configuration (on Linux)
  • Database administration
  • Database user administration
  • Privileges and security
  • Maintenance
  • Monitoring
  • Performance analysis (tips and tricks)
  • Indexes

The two-day course (9:00 – 17:00) costs 890 CHF per person. The course fee includes professional support during the course, the eBooks PostgreSQL Administration Cookbook (9.5/9.6 Edition) and PostgreSQL 9.6 High Performance, as well as the two lunches.

One instructor (Marco Bernasocchi) for 4 to 5 participants, two instructors for 6 to 10 participants.

Posted in Courses, PostgreSQL

Marco becomes QGIS.org Co-chair

We are very proud to announce that one of our founders and directors, Marco Bernasocchi, was elected QGIS.org project steering committee (PSC) co-chair.

With over 10 years of involvement with QGIS (he started working with QGIS 0.6), Marco will serve for the next two years as one of the board members of the QGIS.org association. He is excited to get the chance to work together with the PSC and the fantastic QGIS community to push QGIS even further.

We wish him and the rest of the elected PSC two very successful years full of QGIS awesomeness.

Rock on QGIS!

Read more at QGIS Annual General Meeting – 2018

Posted in Featured, Non-commercial, QGIS

Porting QGIS plugins to API v3 – Strategy and tools

The release of QGIS 3.0 was a great success, and with the first LTR (3.4) scheduled for release this fall, it is now the perfect time to port your plugins to the new API.

QGIS 3.0 is the first major release since September 2013, when QGIS 2.0 was published. Throughout the release cycles of all 2.x releases, the QGIS Python API remained stable. This means that a plugin or script written for QGIS 2.0 still works in QGIS 2.18.

The need for a new major release was principally motivated by the update to newer core libraries such as Qt 5 and Python 3. But it also offered the development team a unique opportunity to tackle long-standing issues and limitations which could not be fixed during the 2.x life cycle. Inevitably, this introduced multiple backward incompatibilities, making scripts and plugins unusable in QGIS 3.

In this post, I’d like to share some notes from my latest ports. Obviously, if you need professional help for porting your plugins, don’t hesitate to contact us.

Step 0 – Unit tests

You should already have your code covered by unit tests, but I know, the world is not perfect, and at times we have to cut corners; unfortunately, unit tests are often the first thing to get cut.
Porting to a new API version is a great moment to write those unit tests, helping to make sure that your plugin will keep working as it did before the port.

Step 1 – fix * imports

Before going on, please go and remove all your * imports (like from PyQt4.QtGui import *). They are bad, and qgis2to3 cannot handle them. There is no need to change them to the PyQt5 version yet; just remove them and add the proper PyQt4 imports. We’ll handle moving to PyQt5 in a later step.
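
For example, a hypothetical module that only uses QDialog and QIcon would change like this:

    # before: a wildcard import, unclear which names are actually used
    from PyQt4.QtGui import *

    # after: explicit imports of just what the module needs
    from PyQt4.QtGui import QDialog, QIcon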

From PEP 8: Wildcard imports (from <module> import *) should be avoided, as they make it unclear which names are present in the namespace, confusing both readers and many automated tools.

Step 2 – Versioning strategy

Since having a source code repository is a mandatory requirement for publishing a plugin on plugins.qgis.org, I assume you already know what code versioning is and why you absolutely should be using it.

APIv2 branch

Unless you absolutely want to make your code run on both API 2 and 3 (which might be possible), I strongly suggest creating a branch of your current version called qgis2, API2, legacy or whatever you want to call it. From now on, this branch will be responsible for all your future releases (probably mainly bugfixes) for the 2.x series of QGIS. Remember to edit the metadata.txt file and add your minimum and maximum version (not mandatory, but nice for clarity):
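
A sketch of the relevant metadata.txt entries for the legacy branch (the version numbers are illustrative):

    qgisMinimumVersion=2.0
    qgisMaximumVersion=2.18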

Master branch

From now on, your master branch will be where all your future development for the 3.x series happens. Remember to edit the metadata.txt file and add your minimum version:
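
Again as a sketch:

    qgisMinimumVersion=3.0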

Step 3 – install the helpers

We created a repository with two dedicated tools to help you migrate your QGIS 2 plugins to QGIS 3: qgis2to3 and qgis2apifinder. Both tools are distributed as a single Python package, installable via pip:
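
Assuming the package is published on PyPI under the name used in the repository:

    pip install qgis2to3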

Please note that a system-wide installation often requires sudo.

All the sources and more information can be found at https://github.com/opengisch/qgis_2to3

Step 4 – Python 2 to Python 3 and PyQt4 to PyQt5

The qgis2to3 tool is a copy of the files found in the QGIS scripts directory, allowing quick download and simple installation without the need to clone the whole QGIS repository. It is a set of fixers for the Python 2to3 command that will update your Python 2 code to Python 3. The additional fixers also take care of the PyQt4 to PyQt5 porting, as well as some other things.

Running the qgis2to3 command will show the changes required. These changes can be applied with the -w flag:
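
A typical invocation might look like this (the plugin path is illustrative):

    # preview the required changes
    qgis2to3 /path/to/your/plugin

    # apply the changes in place
    qgis2to3 -w /path/to/your/plugin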

Step 5 – Check for API v2 usages

The qgis2apifinder tool helps you find usages of the QGIS API version 2 and gives hints about potentially required changes for API version 3.

It is based on machine parsing of https://qgis.org/api/api_break.html, so the results are only as good as the information there.
Also, being a simple text parser, it just gives hints about where to look. It is by no means a complete tool for finding all possible API incompatibilities.

Methods are matched using only their names and not their classes, so there might be various false positives. Also, if the same keyword has been changed in various classes, qgis2apifinder will show you all the available suggestions for that keyword.

You can run qgis2apifinder to get hints on the existence of obsolete code requiring manual porting, along with suggestions on how to actually deal with it. Please note that qgis2apifinder hides some very frequent words like ['layout', 'layer', 'fields'] from the analysis. You can show those with the --all flag.
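
For example (again with an illustrative path):

    # scan a plugin for API v2 usages, including the frequent keywords hidden by default
    qgis2apifinder --all /path/to/your/plugin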

Step 6 – update your code

From here on, it is all about looking at each hint, updating the code and rerunning your tests. A properly configured IDE (stay tuned) can also help in the process.

Some more information can be found at github.com/qgis/QGIS/wiki/Plugin-migration-to-QGIS-3

Also, take a look at the PyQGIS API documentation now online at python.qgis.org/master.

I hope this post and these tools help you port your plugins to QGIS 3. And again, if you need professional help for porting your plugins, don’t hesitate to contact us.

Posted in Featured, Non-commercial, PyQt, Python, QGIS

PostgreSQL back end solution for quality assurance and data archive

Did you know that the possibilities for building a full QGIS back end solution for quality assurance and archiving in PostgreSQL are immense? SQL has its well-known limitations, but with a little bit of creativity you can build quite nice solutions using just triggers and rules. In this post I’ll explain what we did recently in a project for a customer. He needed to assure the consistency of the data, but still give his employees the possibility of quickly feeding the data collected in the field into the database. Another requirement was to keep every status of the data, together with information about the changes (archiving).

There is always the question of where to put the logical part of the solution. QGIS is quite powerful with constraints, but the undeniable advantage of a back end solution is that you can use any front end – no matter what configuration you have in QGIS or what Feature Manipulation Engine (FME) you use – without weakening the guarantee of data validity.

Situation

It’s all about trees

At least for the customer we got lately. The customer owns pieces of land all over Switzerland. On these pieces of land are forests, and in the forests are – as expected – trees. Well, mostly – if you are not a bark beetle or a squirrel – you don’t care about a single tree. Except if there is something special about it. For example, a branch that could fall down on your brand new Citroën DS, or a disease that could kill the whole forest – the very forest that is actually needed to convert the carbon dioxide (from your DS) into oxygen.


The issue trees (yellow) lie in the forest (green) – and the forest lies on the land piece (brown).


And the Entity Relationship Model (ERM) looks like this: a land piece can have zero, one or more forests – and a forest can have zero, one or more trees with issues.

It’s not really about trees

The situation is that a lot of field workers (so-called tree-inspectors) work with our mobile solution QField, collecting data while standing in the middle of a wild forest with one foot in a rabbit hole and the other one in the stinging nettles. It is quite possible, and usual, that there are some problems entering all the data correctly: typing issues on the tablet while running away from wolves, or just lack of concentration because of the beauty of the Swiss forests.

And it’s about lots of front ends

But it’s not only the tree-inspectors. There are the office clerks working with QGIS, planning when the problems on the trees have to be solved. And finally there are the woodsmen solving the issues and setting the status to done, in QField again. So there are a lot of projects using the same data, but with different configurations. If you do all the quality assurance on the front end, you won’t have time to care about the trees anymore – and besides that, it’s error-prone.

Quality assurance in the back end

Data integrity with constraint functions

There are simple constraints, such as requiring that a field is not empty, and more complex constraints with a lot of logic concerning the content of the field.

Simple constraints

Lots of data integrity issues can be solved with simple constraints like NOT NULL (the column must not assume null), UNIQUE (the value must be unique among all rows in the table), or primary key and foreign key constraints.
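
A minimal sketch of such constraints (the table layout is an assumption based on the data model described above):

    CREATE TABLE live.issuetree (
        issuetree_id integer PRIMARY KEY,                    -- every tree is identified
        forest_id    integer NOT NULL
                     REFERENCES live.forest (forest_id),     -- a tree must lie in a forest
        gps_id       text UNIQUE,                            -- no two trees share a GPS id
        done         boolean,
        donedate     date,
        assignee     text
    );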

Checks and constraint functions

For more special cases, or not really technical constraints, we can use checks. Here, for example: if the issue is done, then it needs to have a donedate – but not if done is not TRUE (NULL or FALSE).
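
A sketch of such a check (column names as in the table sketch above):

    ALTER TABLE live.issuetree
        ADD CONSTRAINT chk_done_donedate
        CHECK ( done IS NOT TRUE OR donedate IS NOT NULL );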

And if these cases are more complex and not technical at all, we can put the logic into a function and use its return value (for example, an error message) as the condition. In the following example we want to assure that assignee is the name of one of the employed woodsmen. Of course, it can be NULL too.
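
The constraint itself could then look like this (a sketch):

    ALTER TABLE live.issuetree
        ADD CONSTRAINT chk_assignee
        CHECK ( live.chk_assignee_valid(assignee) );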

And the function live.chk_assignee_valid:
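
A sketch of the function, assuming a hypothetical live.woodsman table holding the employees:

    CREATE OR REPLACE FUNCTION live.chk_assignee_valid(_assignee text)
    RETURNS boolean AS
    $$
    BEGIN
        -- an issue may be unassigned
        IF _assignee IS NULL THEN
            RETURN true;
        END IF;
        -- otherwise the assignee must be an employed woodsman
        RETURN EXISTS (SELECT 1 FROM live.woodsman WHERE name = _assignee);
    END;
    $$ LANGUAGE plpgsql STABLE;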

So with many of these constraints, we can assure a lot, and the data are fully correct. But this is not always comfortable to use. Why? Read on…

Using a “data quarantine”

Let’s imagine that the tree-inspector collected data in QField all day, standing in the middle of the mentioned stinging nettles and rabbit holes, running from wolves etc. Of course he made some mistakes while collecting the data. In the evening he returns to the office tired, already thinking about the dinner his wife is cooking (or his husband, of course), and wants to upload the data from the QField project to the database. And what happens? Lots of error messages. He considers solving them tomorrow, because his wife (or husband) can get quite angry when he is late for dinner. But if he does it tomorrow, the data are stored overnight only on the device and nowhere else. He needs to have them in the database – no matter whether correct or not. And this leads to the idea of the “data quarantine”.

Use Case

All data entered into the database (valid or not) need to be stored. The entries accepted by the constraints are stored normally, in the so-called live tables. The entries that fail because of a constraint are stored in another table: the so-called quarantine table. So for every live table there is a corresponding quarantine table. This means we need another table structure existing parallel to the live tables. We do it with two schemas: the live schema and the quarantine schema.

So the tree-inspector synchronizes his QField project to the database without any problem. The correct entries are written into the live tables, the incorrect ones into the quarantine. Actually, all the data arrive in the quarantine first, and a trigger passes them through to the live table. If they succeed, they are stored in live and removed from the quarantine; otherwise they stay in the quarantine. The same happens when the quarantine clerk later corrects the data entries in the quarantine: on an update they are pushed into the live table. If that succeeds, all good; otherwise the entry stays in the quarantine.

Structure

And how we do that?

It’s all solved by using triggers. SQL triggers are procedural code that is automatically executed on an action on a table or view. For this solution we actually need two triggers per quarantine table. After an insert into, or an update of, the quarantine table, a trigger should be fired for every entry, doing this:

Insert the same entry into the live table. If it succeeds, delete the entry from the quarantine table; else write the error information to the current entry in the quarantine table.

You probably noticed the problem with the recursion, but let’s not think about it at the moment 🙂

Code

In PostgreSQL we can use trigger functions. This means you have triggers on the table calling the functions.

Trigger on table quarantine.issuetree after update
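
A sketch of the trigger declaration (trigger and function names are assumptions):

    CREATE TRIGGER tr_issuetree_after_update
    AFTER UPDATE ON quarantine.issuetree
    FOR EACH ROW
    EXECUTE PROCEDURE quarantine.issuetree_to_live();

A second, analogous trigger fires AFTER INSERT.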

Trigger function (simplified)
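
A minimal sketch without any error handling yet (column names follow the sketches above; quarantine_serial is explained below):

    CREATE OR REPLACE FUNCTION quarantine.issuetree_to_live()
    RETURNS trigger AS
    $$
    BEGIN
        -- pass the entry through to the live table
        INSERT INTO live.issuetree (issuetree_id, forest_id, gps_id, done, donedate, assignee)
        VALUES (NEW.issuetree_id, NEW.forest_id, NEW.gps_id, NEW.done, NEW.donedate, NEW.assignee);
        -- on success the entry leaves the quarantine
        DELETE FROM quarantine.issuetree
        WHERE quarantine_serial = NEW.quarantine_serial;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;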

Trigger function used for the solution when inserting into live

And this is the function with the logical part for success and failure.
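
Still a sketch; error_message is an assumed column of the quarantine table:

    CREATE OR REPLACE FUNCTION quarantine.issuetree_to_live()
    RETURNS trigger AS
    $$
    BEGIN
        INSERT INTO live.issuetree (issuetree_id, forest_id, gps_id, done, donedate, assignee)
        VALUES (NEW.issuetree_id, NEW.forest_id, NEW.gps_id, NEW.done, NEW.donedate, NEW.assignee);
        -- success: the entry leaves the quarantine
        DELETE FROM quarantine.issuetree
        WHERE quarantine_serial = NEW.quarantine_serial;
        RETURN NEW;
    EXCEPTION WHEN OTHERS THEN
        -- failure: the entry stays in the quarantine, with the error message attached
        UPDATE quarantine.issuetree
        SET error_message = SQLERRM
        WHERE quarantine_serial = NEW.quarantine_serial;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;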

As you can see, we use here an id called quarantine_serial. We cannot use the primary key of the live table in the quarantine, because in the quarantine everything is accepted, so nothing in the entered data (not even issuetree_id) has to be unique. But to identify the entry in the quarantine table, we create the serial quarantine_serial.

Trigger function used for the solution when inserting into or updating live

Actually, the trigger function above is not yet usable, because it only works for inserting new data into the live system. Remember the use case: the trigger in the quarantine does not know whether the tree-inspector created a new issuetree or updated an existing one. On synchronization, an INSERT INTO the quarantine is made for all entries – but these can be new entries (new trees) or entries that already exist in the live table. So the trigger function has to decide whether to perform an insert or an update on the live table.
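
A sketch of that decision logic, under the same assumptions as above:

    CREATE OR REPLACE FUNCTION quarantine.issuetree_to_live()
    RETURNS trigger AS
    $$
    BEGIN
        IF EXISTS (SELECT 1 FROM live.issuetree
                   WHERE issuetree_id = NEW.issuetree_id) THEN
            -- the tree already exists in live: update it
            UPDATE live.issuetree
            SET forest_id = NEW.forest_id, gps_id = NEW.gps_id,
                done = NEW.done, donedate = NEW.donedate, assignee = NEW.assignee
            WHERE issuetree_id = NEW.issuetree_id;
        ELSE
            -- a new tree: insert it
            INSERT INTO live.issuetree (issuetree_id, forest_id, gps_id, done, donedate, assignee)
            VALUES (NEW.issuetree_id, NEW.forest_id, NEW.gps_id, NEW.done, NEW.donedate, NEW.assignee);
        END IF;
        DELETE FROM quarantine.issuetree
        WHERE quarantine_serial = NEW.quarantine_serial;
        RETURN NEW;
    EXCEPTION WHEN OTHERS THEN
        UPDATE quarantine.issuetree
        SET error_message = SQLERRM
        WHERE quarantine_serial = NEW.quarantine_serial;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;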

Recursion problem

The problem with the recursion is that we have a trigger after update on the table issuetree in the quarantine. This trigger calls the function, and the function (in case writing to live fails) updates quarantine.issuetree with the error message. So there is another update, and the trigger is fired again, and again, and again… ♪Across the universe♬

We could solve the problem by checking the depth of triggers in PostgreSQL:
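
A common guard at the top of the trigger function, using PostgreSQL’s pg_trigger_depth() (a sketch of the idea):

    -- skip the whole logic if this update was caused by a trigger itself,
    -- i.e. by the error-message UPDATE above
    IF pg_trigger_depth() > 1 THEN
        RETURN NEW;
    END IF;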

And it looks like this

The yellow points are the issue trees in live. If we create another one and make a mistake in it (wrong GPS id), it is stored in the quarantine (pink). When we correct the data, it is written via the quarantine trigger into live. If that succeeds, the point changes its color to yellow.

Actually, the yellow point appears (live) and the pink point (quarantine) disappears, because the entry is inserted into live and deleted from the quarantine.

Archiving all data

There are different reasons why you need to archive data. Maybe someday you want to show your grandchildren how much forest we still had before the sky got dark. But for the mentioned customer the reasons were not sentimental but legal:
when the woodsman cuts down the last bamboo tree of the forest, and this was the only food for the very last living panda bear of Switzerland, we need to know who created or changed this entry in the database, and what tree should have been chopped down instead.

Third schema “archive”

So we created a third schema, parallel to live and quarantine: the archive schema. This means every table in live does not only have a corresponding quarantine table, but also an archive table, where all the old statuses of entries are stored, including the timestamp of when they were archived.

Of course, not only the changed live data are stored in the archive, but also every changed entry from the quarantine.

Use Case 1

The tree-inspector enters an entry for an issue tree that already exists in the live table into the quarantine (1). The after-insert trigger is fired and tries to write to the live table – with success: the entry is written to the live table (2). Before the entry in live is updated, its old status is copied to the archive table (3). Then, in the same transaction, the entry in the quarantine is deleted (1), which means its old status is copied to the archive too (4).

So there will be the updated entry in the live table (2), no entry in the quarantine table (1), and two entries (3 and 4) in the archive table.

Use Case 2

The tree-inspector enters an entry for an issue tree that already exists in the live table into the quarantine (1). The after-insert trigger is fired and tries to write to the live table – and it fails. The entry in the quarantine is updated with the error message (2), and its old status is copied to the archive (1). The office clerk now makes the corrections to this entry. The trigger is fired again, and this time it writes to the live table with success (3). So the old live entry is copied to the archive (4), and after deleting the entry from the quarantine, the second old quarantine status (5) ends up in the archive too.

So there will be the updated entry in the live table (3), no entry in the quarantine table (1 and 2), and three entries (1, 4 and 5) in the archive table.

Structure

And how we do that?

It’s solved by using triggers too. This time we need only one trigger per table – but not only in the quarantine, also in live. It has to be fired before every update of every entry, doing this:

Insert a copy of the current entry into the archive table with the status it had until the update we are doing right now.

Code

It’s the same code for the live and the quarantine table triggers, so only the ones for the quarantine are explained.

Trigger on table quarantine.issuetree before update
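
A sketch (the delete case is included so that removing an entry from the quarantine also archives its last status, as in the use cases above):

    CREATE TRIGGER tr_issuetree_archive
    BEFORE UPDATE OR DELETE ON quarantine.issuetree
    FOR EACH ROW
    EXECUTE PROCEDURE quarantine.issuetree_archive();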

Trigger Function (simplified)
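
A sketch, assuming the archive table has exactly the quarantine table’s columns plus a trailing archived_at column (see below), so the missing column is filled by its default:

    CREATE OR REPLACE FUNCTION quarantine.issuetree_archive()
    RETURNS trigger AS
    $$
    BEGIN
        -- copy the status the row had until now into the archive
        INSERT INTO archive.issuetree_quarantine SELECT OLD.*;
        IF TG_OP = 'DELETE' THEN
            RETURN OLD;  -- let the delete proceed
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;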

And the archive tables have a time column with a default value to store when the entry was archived:
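
For example (the table name is an assumption):

    CREATE TABLE archive.issuetree_quarantine (
        LIKE quarantine.issuetree,
        archived_at timestamp NOT NULL DEFAULT now()
    );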

That’s it

That’s all I needed to tell you. It was a very interesting project and I enjoyed working on it.

Thanks for reading this far. If you have questions, suggestions for improvement or anything else to tell me, please leave a comment.

See yah! 🙂

Posted in Non-commercial, PostgreSQL, QField, QGIS, Scripts

Interlis translation

Lately, I have been confronted with the need to translate Interlis files (from French to German) in order to use queries originally developed for German data. I decided to create an automated converter for Interlis (version 1) Transfer Format files (.ITF) based on the existing cadastral data model from the Swiss confederation (DM01AVCH).

The ILI model file conversion was done manually, once. This was quite simple, since the model used is an extension with little to no difference from the confederation model, which already exists in several languages.

Next was to automate the conversion of the ITF files.

A program developed by Swisstopo, called DM01AVCH_Translator, existed to translate the confederation model’s ITF files. Originally developed in 2008, the solution is sadly no longer maintained by Swisstopo and was available on Windows only. Moreover, it can’t be completely automated, since some interaction is required in the GUI and some tweaks to the output file are needed.

So I decided to develop a dedicated and fully automated solution, which I’d like to share since it is easily adaptable to new scenarios and will hopefully spare some trouble for those who are playing with Interlis files!

You can find this utility, written in Python and called ITF_Translator, at https://github.com/opengisch/ITF_Translator

ITF_Translator

ITF_Translator is capable of translating Interlis v1 transfer files (ITF) into another language thanks to a dictionary text file. It is currently restricted to German, French and Italian, but adding support for other languages is a simple operation.

The ITFTranslator class from the itf_translator_generic module creates a translator object based on a custom dictionary file, and custom translation rules can be added to it.

Two extensions of ITFTranslator exist already and contain everything needed to translate DM01AVCH (the cadastral data model from the Swiss confederation) and MD01MOVD (the cadastral data model from Canton Vaud). These classes are ITFTranslatorDM01AVCH and ITFTranslatorMD01MOVD respectively.

Dictionary file

The dictionary file is a text file composed of lines formatted as follows:

german_translation;french_translation;italian_translation

with the following rules:

  • lines beginning with ‘#’ and blank lines are ignored
  • no spaces are allowed; use underscores ‘_’ instead

Lines are read from top to bottom. If a translation key is repeated, the last one will be used.

The existing dictionaries for ITFTranslatorDM01AVCH and ITFTranslatorMD01MOVD are based on the dictionary from Swisstopo’s tool.

Usage example

To translate the file input.itf, based on the DM01AVCH model, from French to German:
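
A sketch of the call; the module, constructor and method names here are assumptions, so check the repository for the actual API:

    from itf_translator_dm01avch import ITFTranslatorDM01AVCH

    # create a translator for the DM01AVCH model and translate French -> German
    translator = ITFTranslatorDM01AVCH('input.itf')
    translator.translate('output.itf', language_from='fr', language_to='de')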

A file named output.itf is created and contains the translation.

Rules

ITFTranslatorDM01AVCH and ITFTranslatorMD01MOVD extend the ITFTranslator class and implement the additional rules required to correctly translate the respective ITF files. These rules exist to handle non-reversible translations. For instance, in the DM01AVCH model, “element_lineaire” in French can be translated into German as either “linienelement” or “linienobjekt”, depending on the topic. Hereby, we have the opportunity to easily add context-dependent rules which can handle any specific use case.

Looking at the code of ITFTranslatorDM01AVCH demonstrates how easy it is to create translators for other models. Rules are objects of the class SpecialCaseRule.

The goal of these rules is to define the translation of a table within a precise topic. A purely dictionary-based translation treats every occurrence of a word in the source file indistinctly. The proposed approach is convenient because it combines simple dictionary files, which are valid in most cases, with rules to handle specific scenarios.

An example of a rule defined for ITFTranslatorDM01AVCH is:
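
A sketch (the constructor signature is an assumption; the topic, table and translation names are taken from the explanation below):

    # within the topic "Bords_de_plan", translate the table
    # "Element_lineaire" as "Linienobjekt" instead of the dictionary default
    rule = SpecialCaseRule('Bords_de_plan', 'Element_lineaire', 'Linienobjekt')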

It solves the example cited previously, specifying that the French-to-German translation of the table “Element_lineaire” in the topic “Bords_de_plan” is “Linienobjekt”, while the dictionary file says the translation of “Element_lineaire” is “Linienelement” in any other case.

Posted in Interlis, Non-commercial, Python

PyQGIS Course 13.11./14.11.2017 in Neuchâtel

The course is fully booked.

The course is aimed at advanced QGIS users who want to extend their possibilities through the use of Python in QGIS. During this training, we will look at various ways of interacting with the QGIS API, as well as the creation of simple graphical user interfaces with PyQt.

The following topics will be covered:

  • Using the Python console in QGIS
  • Interacting with the user through buttons and other graphical tools
  • Introduction to the plugin infrastructure
  • Creating a Processing algorithm in a plugin
  • Creating dialogs with Qt Designer
  • Using PyCharm as an IDE

The course will be based on code targeting QGIS 2. Wherever possible, code compatible with QGIS 3 will be used, and the limits of compatibility will be pointed out.

The two-day course (9:00 – 17:00) costs 890 CHF per person. This price includes registration, the training materials, the book QGIS Python Programming Cookbook – Second Edition by Joel Lawhead (2017) in paper and ebook versions, as well as the two lunches.

One instructor (Denis Rouzaud) for 4 to 5 participants, and two instructors for 6 to 10 participants.

The course is fully booked.

Posted in Courses, PyQGIS, Python, QGIS, QGIS Plugins

Best practices for writing Python QGIS Expression Functions

Recently there have been some questions and discussions about Python-based expression functions and how parameters like usesGeometry need to be used, so I thought I’d quickly write down how this works.

There is some intelligence

If the geometry or a column is passed in as a parameter, you do not need to request it manually; you can even specify explicitly that you do not require the geometry or any columns.
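
A sketch of such a function (the function name is illustrative; the decorator parameter spelling follows this post, and in QGIS 2 the decorator is imported from qgis.utils instead of qgis.core):

    from qgis.core import qgsfunction

    @qgsfunction(args='auto', group='Custom', usesGeometry=False, referencedColumns=[])
    def buffered_area(geometry, radius, feature, parent):
        """Returns the area of the given geometry buffered by radius."""
        if geometry is None:
            return None
        return geometry.buffer(radius, 8).area()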

We can still call it within an expression by writing
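
Continuing the sketch above ("impact_radius" is an illustrative field name):

    buffered_area($geometry, "impact_radius")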

The expression engine will do the appropriate thing and request the geometry and attributes automatically.

Hardcoded parameters

We can also write the function the following way. The difference is that we will only ever be able to use it with this specific layer, because it’s not portable. But sometimes there might be a good reason for doing that.
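
Again a sketch, this time hardcoding the geometry and the impact_radius field inside the function body:

    from qgis.core import qgsfunction

    @qgsfunction(args='auto', group='Custom', usesGeometry=True,
                 referencedColumns=['impact_radius'])
    def impact_area(feature, parent):
        """Returns the buffered area around the feature, based on its impact_radius field."""
        if feature.geometry():
            return feature.geometry().buffer(feature['impact_radius'], 8).area()
        return None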

Notice that the geometry and columns are mentioned in two places: in the decorator (usesGeometry=True and referencedColumns=['impact_radius']) as well as within the function body, with feature.geometry() and feature['impact_radius'].

Also notice that we check whether the feature actually has a geometry, with if feature.geometry(). It’s a common pitfall that features with a NULL geometry suddenly make expression functions fail. It’s very easy to overlook this in development and then hard to track down in a production environment. Better stay on the safe side.

When you call this from an expression, you will call it the following way:
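
Continuing the sketch – since everything is hardcoded, the call takes no arguments:

    impact_area()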

Require all attributes

Sometimes it’s required to actually make sure that you have all attributes available. In this case you can specify referencedColumns=[QgsFeatureRequest.ALL_ATTRIBUTES].

The following expression generates a list of all attributes of a feature, separated by a comma. For this it obviously requires access to all attributes:
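
A sketch of such a function:

    from qgis.core import qgsfunction, QgsFeatureRequest

    @qgsfunction(args='auto', group='Custom', usesGeometry=False,
                 referencedColumns=[QgsFeatureRequest.ALL_ATTRIBUTES])
    def attribute_list(feature, parent):
        """Returns all attribute values of the feature, separated by commas."""
        return ', '.join([str(attr) for attr in feature.attributes()])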

Break it down

  • If you don’t hardcode attributes or the geometry inside your function, specify usesGeometry=False, referencedColumns=[]. As a rule of thumb, prefer to do things this way; it makes it easier to reuse functions in the future.
  • If you do hardcode the geometry or attributes, declare this manually in the decorator.

Posted in Expressions, Non-commercial, Python, QGIS

QGIS Expressions Engine: Performance boost

Expressions in QGIS are more and more widely used for all kinds of purposes.

For example the recently introduced geometry generators allow drawing awesome effects with modified feature geometries on the fly.

In the last days, at the QGIS developer meeting 2017, I spent some time looking into and improving the performance of expressions. This was something that had been on my todo list for a while, but I had never got around to working on it.

Short story:

  • Some expressions used as part of the 2.5D rendering became almost 50% faster
  • Overall, 2.5D rendering experiences a performance improvement of 30%

Read on if you are interested in what we have done and want to get some insights into the internal handling of the expression engine.

The complexity will gradually be increased throughout the article.

Preparing expressions

QgsExpression has had a prepare() method for a long time. This method should be called once, just before an expression is evaluated for a series of features. The easiest example for this is the field calculator:

  1. Create an expression
  2. Prepare the expression
  3. Loop over all the features in the attribute table

Historically, this method resolved the attribute indexes in a layer. If a layer has the attributes "id", "name" and "height", and the expression is "name" || ': ' || "height", this would just convert it to column(1) || ': ' || column(2). Accessing attributes by index is generally a bit faster than by name, and the index is guaranteed to be static throughout a single request, so it’s an easy win.
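
A sketch of the pattern in PyQGIS (QGIS 3 spelling; layer is assumed to be a vector layer with those fields):

    from qgis.core import (QgsExpression, QgsExpressionContext,
                           QgsExpressionContextUtils)

    expression = QgsExpression('"name" || \': \' || "height"')
    context = QgsExpressionContext()
    context.appendScopes(QgsExpressionContextUtils.globalProjectLayerScopes(layer))

    # resolve field indexes etc. once
    expression.prepare(context)

    for feature in layer.getFeatures():
        context.setFeature(feature)
        print(expression.evaluate(context))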

Static nodes

The first thing that happens to an expression in its lifetime is that it is translated from text into a tree of nodes.

A simple example: 3 * 5

  • NodeBinaryOperator (operator: *)
    • left: NodeLiteral(3)
    • right: NodeLiteral(5)

It’s trivial to see that we do not need to calculate this for each feature every time, since there are no fields or other magic ingredients involved. It’s always 15: it’s static.

Precalculating and caching

If we check that an expression only consists of static nodes, we can just precalculate it once and then reuse this value. This also works for partial expressions; let’s say 1 + 2 + "three" can always be simplified to 3 + "three".

We just have to find out for every node whether it’s static, and then scan for nodes that are made up only of static descendants themselves. The only thing that is not static are the attributes (that is, NodeColumnRef), right?

Performance win number 1: precalculate and cache static values.

Functions

In a first step, each function was tagged as non-static and is therefore expected to return a new value for each iteration. This approach is safe but, as you can guess, there is plenty of room for improvement.

Within the 2.5D renderer, for example, the following is used (simplified here):
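
Presumably along these lines (reconstructed from the explanation below):

    translate( $geometry, cos( radians( 70 ) ) * 10, sin( radians( 70 ) ) * 10 )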

This expression will translate (displace) the footprint of a building by 10 meters at an angle of 70 degrees (that’s where the roof of a large building would be painted).

The whole part cos(radians(70)) can be simplified to 0.34202. Of course, we could have directly entered this value on the user side, but it’s much more readable and maintainable if we put the calculation there and let the computer do the hard work.

On the other hand, the outer block translate($geometry, [x], [y]) cannot be precalculated. It depends on the footprint of the building, so there’s nothing that can be done there.

Conclusion: a function will return a static value unless one of its parameters is non-static. cos and sin are static because their content is static; translate is not, because there’s also the $geometry.

Performance win number 2: precalculate and cache functions when they only have static parameters.

Dynamic functions

Meet the rand() function. It will always return a different value, although it has no non-static parameters. It just returns a new value every time, and we do not want it to be cached.

The conclusion is easy: find all the functions that are not static and tag them as such.

Caveat: some functions behave differently.

Variables

Next in the queue are variables. Variables are an awesomely cool concept that allows getting properties like the filename of the current layer or project, or additional, manually defined project- or system-specific values. They are mostly static. Right? Of course not. Some of them get set at rendering time. For example, there is a variable @symbol_color that can be used to get the current symbol color. It allows for creating really cool effects, but we don’t want this value to be cached.

Performance win number 3: precalculate and cache variables

Caveat: Only when they really are static

The strange kids in the block

And finally there are also the really funky functions. There is for example eval, which takes a string parameter that can be parsed as an expression. Some examples are eval('7'), which returns 7 (an integer), eval('1>3'), which returns false, and eval("condition"), which reads the content of the field "condition" and treats it as an expression. So a new level enters the equation: not only the parameter node itself (which is treated as a string) needs to be static, but also the expression that is created from parsing this string.

Caveat: when there is a function like eval() or order_parts() that takes expression strings as parameters, be extra careful and check whether the expression string as well as the expression in the string are static.

Only precalculate if everything is really static. If the expression string is static but its content is not, we can still do something.

For example, when rendering with the 2.5D renderer and setting the building height based on the number of stories (assuming an average room height of 2.7 meters), there would be an expression eval(@25d_height), with the variable @25d_height being set to "stories" * 2.7. The string is static (@25d_height is a static layer variable), but we can’t precalculate the value ("stories" is not static). However, we can still prevent the expression engine from reparsing the expression on every iteration, and potentially we can even precalculate parts of such an expression. Especially the fact that the expression does not need to be parsed over and over again results in a big win.

Performance win 4: Parse and prepare evaluated expressions if they are static.

Conclusion

It was well worth investing the time into improving the more and more widely used expression engine. Having a responsive system improves user experience and productivity.

I only had the chance to work on this thanks to the QGIS developer meeting in Essen. Such events wouldn’t be possible without people and organisations sponsoring the QGIS project and a motivated community. You are all awesome!

This will be part of QGIS 3.0 which is expected to be released later this year.

Outlook

While this is a great step forward, it doesn’t stop here.

  • It should be possible to use this new mechanism to put some load from the local QGIS installation onto a database server (see our previous project to compile expressions).
  • The whole mechanism only works if an expression is actually prepared. Unprepared expressions will need to be identified and prepared to make use of this system.

If you would like to support such an initiative, please do not hesitate to contact us. We would love to make QGIS even faster for you!

Posted in Expressions, GIS, Non-commercial, QGIS

QGIS2 compatibility plugin

Lately I’ve been spending time porting a bigger plugin from QGIS 2.8 to 3 while maintaining 2.8 compatibility. You can find it at https://github.com/opengisch/qgis2compat/ and http://plugins.qgis.org/plugins/qgis2compat/

One code to rule them all.

My target was to have to edit the source code as little as possible, simulating a lazy or busy coder who has to upgrade his/her plugins.

Lots of work has already gone into 2.14 to support PyQt 4 and 5 with the same code (Kudos to jef-n and mkuhn).

The qgis Python package will then use the appropriate PyQt for you. But not everything can be fixed in QGIS itself, and this is where QGIS2compat can help you.

Use cases

QGIS2compat targets two main use cases.

PyQt compat

If you still need to rely on QGIS < 2.14, writing from qgis.PyQt will not work for you, as the qgis.PyQt package is simply not there. This is one of the two use cases where QGIS2compat can help you. This feature is complete.

QGIS 2-3 API compatibility

The other use case of the QGIS2compat plugin is the availability of a QGIS API compatibility layer, which lets you write your code against the QGIS 3 API and takes care of adapting it to the QGIS 2 API. This feature is an ongoing work in progress, since we are in the middle of the API breakage period. So we do need your help to keep adding new apicompat fixes (see below).

Usage

In your plugin’s __init__.py you should put something like the example below. This will pick the QGIS PyQt compatibility layer, which is available since QGIS 2.14, or fall back to qgis2compat.

Also, if you are on QGIS >= 2.14 and QGIS < 3, it will run the apicompat package, which takes care of the Python API changes between QGIS 2 and QGIS 3.
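
A sketch of what such an __init__.py block could look like (the exact import paths should be taken from the QGIS2compat documentation):

    try:
        # QGIS >= 2.14 ships the qgis.PyQt compatibility layer
        import qgis.PyQt  # noqa
    except ImportError:
        # older QGIS: fall back to the layer provided by the QGIS2compat plugin
        import qgis2compat  # noqa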

In each module where you do PyQt imports, you should use the following structure.
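
A sketch (the qgis2compat.PyQt fallback path is an assumption; see the plugin’s documentation for the canonical snippet):

    try:
        from qgis.PyQt.QtCore import Qt
        from qgis.PyQt.QtGui import QIcon
    except ImportError:
        from qgis2compat.PyQt.QtCore import Qt
        from qgis2compat.PyQt.QtGui import QIcon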

This guarantees that the imports come from the most appropriate and up-to-date place, and gives you PyQt4 and PyQt5 support for QGIS >= 2.8.

Updating your plugin

This can be done automatically by the 2to3 tool included in the QGIS source code. Please note that it is not the plain Python 2to3 tool; it can be found at https://github.com/qgis/QGIS/blob/master/scripts/2to3. This tool will fix many (though probably not all) issues with your code and make it compatible with Python 3.

After running 2to3, update your __init__.py as explained above.

Once done, it is time to run your tests (which you of course have written before migrating) and fix the minor glitches that might have appeared.

Adding new apicompat fixes

To add a new API compatibility fix, just create a new module in apicompat (or add to an existing one) and import it in __init__.py, like it is done for qgsvectorlayer.py.

As QGIS2compat works at a fairly low level, we require unit tests for each fix to be included in each pull request.

Need professional help?

OPENGIS.ch can help you update all your plugins and assist with any QGIS-related problem you have. Contact us for a quote.

Posted in Non-commercial, PyQt, QGIS, QGIS Plugins

Updating PyQt signals that use lambda in QGIS with 2to3

Just for the sake of documenting things: when running the QGIS 2to3 script on a plugin, I encountered a tricky situation regarding signals.

The original code:
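
Reconstructed along the lines described below (names are illustrative; this lives inside the plugin class):

    # old-style PyQt4 connection; SIGNAL("triggered()") has no arguments,
    # so the lambda's extra_arg keeps its default value my_arg
    my_arg = "my test argument"
    QObject.connect(self.action, SIGNAL("triggered()"),
                    lambda extra_arg=my_arg: self.do_load_project(extra_arg))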

The generated code:
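
Again a reconstruction of what the conversion produces (new-style syntax):

    # new-style connection: triggered now passes its optional
    # 'checked' argument, which overrides extra_arg
    my_arg = "my test argument"
    self.action.triggered.connect(
        lambda extra_arg=my_arg: self.do_load_project(extra_arg))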

So in do_load_project we get False instead of "my test argument". Why?
Due to a subtle difference in the generated code. In the original code we had the signature triggered(), which has no arguments, so in our lambda extra_arg gets passed my_arg.
In the generated code, triggered actually has an optional parameter checked [1] which, when emitted, gets passed to extra_arg, causing the problem.

The correct code (note the additional argument in the lambda definition):
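
A sketch of the fix:

    # the lambda now swallows the optional 'checked' argument explicitly
    my_arg = "my test argument"
    self.action.triggered.connect(
        lambda checked, extra_arg=my_arg: self.do_load_project(extra_arg))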

Some references:
[0] http://pyqt.sourceforge.net/Docs/PyQt5/signals_slots.html
[1] http://doc.qt.io/qt-4.8/qaction.html#triggered

Posted in Non-commercial, PyQt, QGIS Plugins
Contact
OPENGIS.ch GmbH
Mythenstrasse 37A
8840 Einsiedeln
Switzerland

Email: [email protected]
Twitter: @OPENGISch
Mobile: +41 (0)79 467 24 70
Skype: mbernasocchi
Support QField development