Injeqt 1.0.0

There is not much to say. I've been working with Injeqt (a Qt dependency injection library) for some time now, preparing the Kadu 2.0 release, and I've not encountered any problems at all. It is great to have only one feature and full test coverage for it. So here it is, the 1.0.0 release!

I really enjoy using this small library, but I've also learned that I wish it had a few more features:

Subinjectors

I would like to use the main Kadu injector in plugin code in the normal way, i.e. to have a class with the following header file and its setters invoked by the injector:

class ChatNotifier : public Notifier
{
    Q_OBJECT
private slots:
    ...
    INJEQT_SETTER void setChatWidgetRepository(
        ChatWidgetRepository *chatWidgetRepository);
    INJEQT_SETTER void setFormattedStringFactory(
        FormattedStringFactory *formattedStringFactory);
};

auto plugin_injector = injeqt::injector(
    plugin_modules, main_injector);
auto notifier = plugin_injector.get<ChatNotifier>();

Currently this is not possible: the list of classes supported by an injector is immutable and set at injector creation, and the ChatNotifier class needs to be added later, as it lives in a dynamically loaded plugin. So this code must be used instead:

class ChatNotifier : public Notifier
{
    Q_OBJECT
public:
    ...
    void setChatWidgetRepository(
        ChatWidgetRepository *chatWidgetRepository);
    void setFormattedStringFactory(
        FormattedStringFactory *formattedStringFactory);
};

auto notifier = new ChatNotifier{this};
notifier->setChatWidgetRepository(
    injector.get<ChatWidgetRepository>());
notifier->setFormattedStringFactory(
    injector.get<FormattedStringFactory>());

Subinjectors will solve this problem. An injector will be able to have a parent injector and will know all of its objects, so they can be reused in an easy way.
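The resolution order behind that idea can be sketched in a few lines. This is a hypothetical toy, not Injeqt's actual API: the child injector resolves a type locally first and falls back to its parent otherwise.

```cpp
#include <cassert>
#include <memory>
#include <typeindex>
#include <unordered_map>

// Example service type from the scenario above (the name is taken from the post).
struct ChatWidgetRepository {};

// Toy injector illustrating parent fallback; NOT Injeqt code.
class toy_injector
{
public:
    explicit toy_injector(toy_injector *parent = nullptr) : m_parent{parent} {}

    template<typename T>
    void add(std::shared_ptr<T> object)
    {
        m_objects[std::type_index{typeid(T)}] = object;
    }

    template<typename T>
    std::shared_ptr<T> get() const
    {
        auto it = m_objects.find(std::type_index{typeid(T)});
        if (it != m_objects.end())
            return std::static_pointer_cast<T>(it->second);
        // Not registered locally: ask the parent injector, if any.
        return m_parent ? m_parent->get<T>() : nullptr;
    }

private:
    toy_injector *m_parent;
    std::unordered_map<std::type_index, std::shared_ptr<void>> m_objects;
};
```

A plugin injector constructed with the main injector as parent would then transparently see objects such as ChatWidgetRepository that only the main injector knows about.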

As this is the feature I miss most, it will be included in Injeqt 1.1, which will be released just before Kadu 3.0 (hopefully in about 3 or 4 months).

Autoconnections

Injeqt should have an option to recognize matching signals and slots in the objects it creates and connect them automatically. I'm not 100% sure about the semantics (should it create objects just to connect them, even if the application does not use them yet, or should it wait until the application asks for an object before creating and connecting it?). This is much more complicated to implement than subinjectors, so it is postponed to a later release.
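As a purely speculative illustration of what such matching could look like (none of these names come from Injeqt), a convention-based matcher might pair a signal `chatAdded` with a slot named `onChatAdded`:

```cpp
#include <cassert>
#include <cctype>
#include <string>
#include <utility>
#include <vector>

// Hypothetical naming convention: signal "chatAdded" matches slot "onChatAdded".
inline std::string matching_slot_name(const std::string &signal_name)
{
    if (signal_name.empty())
        return {};
    std::string slot = "on" + signal_name;
    slot[2] = static_cast<char>(std::toupper(static_cast<unsigned char>(slot[2])));
    return slot;
}

// Pair up every signal with a slot that follows the convention above.
inline std::vector<std::pair<std::string, std::string>> match_connections(
    const std::vector<std::string> &signal_names,
    const std::vector<std::string> &slot_names)
{
    std::vector<std::pair<std::string, std::string>> result;
    for (const auto &sig : signal_names) {
        const auto wanted = matching_slot_name(sig);
        for (const auto &slot : slot_names)
            if (slot == wanted)
                result.emplace_back(sig, slot);
    }
    return result;
}
```

In a real implementation the names would come from Qt's meta-object system rather than plain string lists; the sketch only shows the matching rule itself.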

Introducing the Qt Quick 2D Renderer


When Qt Quick 2 was introduced with the release of Qt 5.0, it came with a minimum requirement of either OpenGL 2.0 or OpenGL ES 2.0.  For desktop and mobile platforms this is usually not an issue, and when it is, for example on Windows, it is now fairly easy to use an OpenGL software rasteriser as a fallback.  If however your target is an embedded device without a GPU capable of OpenGL ES 2.0, then software rasterisation of OpenGL is an unwise option.  Embedded devices without a GPU typically have fewer CPU resources available as well, so the overhead introduced by software rasterisation of OpenGL leads to unacceptable performance for even the simplest content.  Also, many of the performance optimisations gained by using OpenGL to render Qt Quick 2 scenes are negated by software rasterisation.

So as a solution to our Professional and Enterprise customers we are now providing an alternative scene graph renderer called the Qt Quick 2D Renderer.  The Qt Quick 2D Renderer works by rendering the Qt Quick scene graph using Qt’s raster paint engine instead of using OpenGL.   Using the Qt Quick 2D Renderer is as simple as building the module and setting an environment variable:

export QMLSCENE_DEVICE=softwarecontext

Now, instead of the default OpenGL renderer, Qt Quick will load our renderer plugin.  This plugin makes it possible to run Qt Quick 2 applications on platform plugins without OpenGL capability, like LinuxFB.

But wait! Doesn’t the QtQuick module itself depend on OpenGL?

Unfortunately, the Qt Quick module cannot be built without Qt itself being configured with OpenGL support.  So even though most calls to OpenGL inside the Qt Quick module have now been moved to the renderer, Qt Quick still has APIs that cannot be changed during the Qt 5 release series and that depend on OpenGL.  Fortunately, as long as you do not use those APIs, no OpenGL functions will be called.

So along with the Qt Quick 2D Renderer module we provide a set of dummy libraries and headers that will allow you to build Qt with OpenGL support, enabling you to build and use the QtQuick module.  However if you accidentally call any OpenGL functions, do not be surprised when your application crashes.

Limitations

So there are some downsides to not using OpenGL.  The first, and maybe most obvious, is that any scene graph nodes that require the use of OpenGL are ignored.  Since the Qt Quick 2D Renderer is not actually rasterising the OpenGL content, but rather issuing an alternative set of render commands to produce the same result, it is not possible to use any OpenGL.  Existing functionality in Qt Quick 2 that requires OpenGL to be present, like ShaderEffects or Particles, cannot be rendered.  So in many cases your Qt Quick UI containing these elements will still run, but the portions of your UI depending on these items will not be displayed.

The second limitation you can expect is a serious performance penalty.  When rendering with OpenGL and a GPU, you will get painting operations like translations basically for free. Without OpenGL however operations like rotating and scaling an item become expensive and should be avoided whenever possible.  We also cannot easily do neat tricks to determine what not to paint.  We have to fall back to the painter’s algorithm and paint everything visible in the scene from back to front.
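The back-to-front strategy can be sketched in a few lines. This is an illustration of the painter's algorithm, not the renderer's actual code: items are sorted by stacking order and each one is painted in full, even where a later item will cover it.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// A scene item with a stacking order: lower z is further back.
struct PaintItem
{
    std::string name;
    int z;
};

// Painter's algorithm: sort back to front and paint everything in order,
// letting later paints simply overwrite earlier ones. Nothing is culled.
inline std::vector<std::string> paint_order(std::vector<PaintItem> items)
{
    std::stable_sort(items.begin(), items.end(),
                     [](const PaintItem &a, const PaintItem &b) { return a.z < b.z; });
    std::vector<std::string> order;
    for (const auto &item : items)
        order.push_back(item.name); // painted fully, even if covered later
    return order;
}
```

With a depth buffer and a GPU, fully occluded items could be skipped cheaply; here every visible item costs a full paint, which is exactly the overdraw penalty described above.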

Another thing to keep in mind is that partial updates of the UI are not supported.  That means that if something in the scene needs to be redrawn, everything in your Qt Quick window will be redrawn.  This is not likely to be changed, and is due to the primary use case of Qt Quick 2 being an OpenGL renderer. 

Hardware Acceleration

Even though the lack of OpenGL translates to some pretty big compromises regarding performance with Qt Quick 2, all hope is not lost.  Many devices still have hardware available to accelerate 2D graphics.  This hardware is typically capable of accelerating certain types of painting operations like copying pixmaps and filling rectangles.  The Qt Quick 2D Renderer is optimised to take full advantage of any 2D hardware acceleration that may be provided by a platform plugin.

For embedded Linux the DirectFB platform plugin can enable Qt to take advantage of 2D graphics acceleration hardware if available.  If you then use the Qt Quick 2D Renderer with the DirectFB plugin, the drawing of QQuickItems like Rectangle, Image, and BorderImage will be accelerated in many cases.  2D graphics hardware does have limitations to what transformations can be accelerated though, so keep in mind that if you set the Rotation on an Item you will not be able to take advantage of hardware acceleration.

Not Just Embedded

It is worth mentioning that while the Qt Quick 2D Renderer was developed with the “embedded devices without OpenGL” use case in mind, its use is not limited to embedded.  It is possible to test out the Qt Quick 2D Renderer on non-embedded platforms by using the same environment variable.  Keep in mind though that with the 5.4.0 release there are some rendering issues with screens that have a device pixel ratio greater than 1.0.  This should be resolved in the upcoming 5.4.1 release.

Who should use this?

For an embedded device project, if the requirement is a fluid UI with 60 FPS animations like those seen in the average smartphone, then you absolutely need hardware that supports OpenGL ES 2.0.  If however you have existing hardware without a GPU capable of OpenGL ES 2.0, or simply lower expectations for lower-cost hardware, then the Qt Quick 2D Renderer is the way to go when using Qt Quick 2.

Qt Quick 2D Renderer also provides the opportunity to share more code between the targets in your device portfolio.  For example if you are deploying to multiple devices that may or may not have OpenGL support, you can use the same Qt Quick 2 UI on all devices, even on the ones where previously you would either need to have a separate UI using QtWidgets or the legacy QtQuick1 module.

For desktop there are a few cases where it may make sense to use the Qt Quick 2D Renderer.  On Windows it can be used as an alternative to falling back to ANGLE or Mesa3D in the situation where neither OpenGL 2.0 nor Direct3D 9 or 11 are available.  It also makes it possible to run Qt Quick 2 applications via remote desktop solutions like VNC or X11 forwarding where normally the OpenGL support is insufficient.

Looking Forward

The Qt 5.4.0 release is just the start for the Qt Quick 2D Renderer.  Work is ongoing to improve the performance and quality to provide the maximum benefit to device creators writing Qt Quick 2 UIs for embedded devices without OpenGL.  One of the things that is being worked on now is enabling the use of QtWebEngine with the Qt Quick 2D Renderer which currently is unavailable because of a hard dependency on OpenGL.  Here is a preview of QtWebEngine running on a Colibri VF61 module from Toradex:

Fun with Android

AKA: I know what you did last Christmas!

Hello folks,

I’d like to share with you a few things that I did during this Xmas.

One thing I learned is that I'll never ever take a vacation before Xmas again, because Brasov is a tourist attraction and it was FULL of tourists in that period. I could ski only once :( because it snowed just a few days before New Year, and then the police closed the roads to the ski resort because there were way too many cars in that area.

Anyway, it was also great because I spent a lot of time with my family. But because there was no snow outside, I also had some time for myself and I didn't want to waste it all. So, for the sake of old times, I started to buy some old games (from gog.com) and began to play them with my son. We soon finished most of them, and then we started to play 0 A.D., a super cool and free (as in freedom, and also as in beer) strategy game. After a while my son asked me if he could play that game on a tablet. I said:

“At this moment you can’t play it, but I’ll take a look :)”

This is how my new journey began, and this is what this article is about :) .

Strangely, it was exactly five years ago, on the very same day, that I began another journey: the Qt on Android port ;-).

Chapter I

So, I began to check the source code, posted my intention on Wildfire Games' forum, and then started the 0 A.D. game engine port:

  • the first step was to compile all of 0 A.D.'s dependencies. TBH, this was the most time-consuming step and it wasn't fun at all. The Qt framework has all the features that 0 A.D. needs, and my first approach was to use Qt in place of all of 0 A.D.'s dependencies, but the 0 A.D. developers didn't like that, so I had to cross-compile all the dependencies for Android. Someone else had already begun this step some time ago, but it was quite unfinished. The good news is that I learned how to cross-compile automake, cmake, etc. projects for Android ;-). Again, this job is anything but fun :).
  • the next step was to add first-class Android support to 0 A.D., so that I could code, deploy, run & debug on Android. Even though I know how to debug Android apps from a terminal, I'm way WAY too lazy to do such a thing. Of course the only choice I had was QtCreator. When I designed QtCreator's Android plugin, I did it in such a way that it could also be used by non-Qt apps, but I never had the chance to put that theory to the test :D. So, after I had added qmake project files (nope, I don't like qmake that much; I chose it simply because it is the only build system that QtCreator can use to target Android devices), it was time to test whether I could use QtCreator to develop non-Qt apps on Android. Folks, I'm happy to let you know that QtCreator works just fine with non-Qt apps targeting Android! In this step I also had to update the Android (Java) part of the project to add the debug support needed by QtCreator.
  • the next step was to fix all the crashes and enable all of 0 A.D.'s features. Using QtCreator and being able to debug the application in a decent way, this job was quite easy, and soon I had fixed all (visible) crashes and enabled all of 0 A.D.'s features!
  • but la vie is not always en rose, and I faced a problem that was way beyond my (current) knowledge: 0 A.D. had some problems with GLES on Android. Together with a 0 A.D. developer, we started hunting these errors. But this job was extremely hard and boring… The problem was that I had to add tons of glGetError() calls to find the place where the error occurred… After a long time we managed to hunt down one of the errors, but a few more still remained. Thanks to my legendary laziness, I started to look for a much easier solution. That developer asked me if Android has any OpenGL debuggers/tracers, and from that moment on I became obsessed with the idea of finding a decent way to debug/trace OpenGL calls on Android.

Chapter II

AKA: The quest of hunting down 0 A.D.’s GLES problems!

So, the big question is: Are there any decent OpenGL debuggers/tracers for Android? A quick search on the net answered my question. It seems there are a few OpenGL debuggers/tracers for Android. So, I started to check if any of them are decent and useful :) .

  • I began my research with Android's own tool. Starting a trace is quite easy: you start the application on the device, click on the trace icon (in the tool), choose a file (on your desktop), check a few options and wait… After the trace finished I was very anxious to see which one of the 10k+ OpenGL calls per frame caused the problem! Sadly, I had a very unpleasant surprise: the thing didn't point me to the problem at all :( . Even worse, going from one frame to another takes an eternity… So, at least for me, this tool was no good.
  • then I found Mali's Graphics Debugger (MGD). After a quick look at their website my heart was full of hope! MGD is not that easy to use, but I managed to get it deployed on Android. I started the GUI and got it connected to the application on the device. But after this step my hope was ruined and my heart broken: after it connects to the GUI, the application crashes on the device; it seems MGD has some unimplemented EGL/GLES APIs :( . Of course my first thought was that I had done something wrong, and I started to dig deeper, but no luck. So, one more OpenGL debugger off my list.
  • the next tool on my list was PowerVR's. I looked at it with a lot of hope, but that soon vanished because, after I downloaded it, the “SDK” asked me for root permissions to install. I wasn't that desperate to give it root permissions!
  • I briefly checked NVIDIA's Tegra Graphics Debugger, but I didn't even dare to download it because I don't have any Tegra devices.
  • The last tool I checked was apitrace. Folks, it was love at first sight! Even though apitrace is not as easy to use as Google's tool, I managed to integrate it quite easily (and in a decent way) into 0 A.D.'s project. After I created the trace, I pulled it from the device to check it. TBH, after the previous failures my hopes were not that high, but I took the courage to check it anyway. Folks, apitrace's GUI is a gazillion times faster than Google's tool! I was extremely surprised to see my laptop actually replaying the trace and finding some errors! I began to check those errors and soon found out that my desktop GLES implementation is not the same as the one on Android :( … Most probably the various Android implementations differ among themselves too (e.g. Mali's implementation is probably different from Adreno's). The only hope of catching those errors was, somehow, to do the retrace on Android itself…

Chapter III

Retracing on Android … AKA: Mission Impossible

So, I started to check the apitrace (retrace) source code to see if and how it could be ported to Android. The challenge was to pass the application its arguments and to somehow redirect stdout and stderr from the device to the desktop.

Apitrace (retrace) uses stdout and stderr to send results back to the user/UI. So, the only way to do it was to hook stdout and stderr on Android and redirect all the traffic to a pair of sockets, then forward those sockets (using adb) to the desktop. I also chose to use the stdout socket to send the params to the application on the device. But what if the application doesn't have the INTERNET permission, which is needed for sockets? What if those ports are already in use? These problems were easy to avoid, because I had asked myself the same questions a few months earlier, when I worked on Android 5.0 support, so I chose to use LocalServerSocket instead of old-fashioned sockets. This way all the potential problems were avoided.
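The hooking idea can be illustrated on the desktop with a pipe standing in for the LocalServerSocket (a minimal POSIX sketch, not the actual apitrace patch): dup2() makes file descriptor 1 point at the write end of a pipe, so everything printed to stdout can be read back and forwarded elsewhere.

```cpp
#include <string>
#include <unistd.h>

// Redirect stdout into a pipe, write a message "through" stdout, restore
// the real stdout, and read the captured bytes back from the pipe.
inline std::string capture_stdout_line(const std::string &message)
{
    int fds[2];
    if (pipe(fds) != 0)
        return {};

    int saved_stdout = dup(STDOUT_FILENO);   // remember the real stdout
    dup2(fds[1], STDOUT_FILENO);             // hook: stdout now feeds the pipe
    close(fds[1]);

    ssize_t written = write(STDOUT_FILENO, message.data(), message.size());
    (void)written;

    dup2(saved_stdout, STDOUT_FILENO);       // restore the real stdout
    close(saved_stdout);                     // all pipe write ends now closed

    char buffer[256] = {};
    ssize_t n = read(fds[0], buffer, sizeof(buffer) - 1);
    (void)n;
    close(fds[0]);
    return std::string{buffer};
}
```

On the device the read end would instead be drained by a thread pushing the bytes into the LocalServerSocket, which adb then forwards to the desktop.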

So, I created an Android GLES retracer on the C++ side (based on egl_x11), a custom Android Activity, a custom Surface, etc. Of course, I also added QtCreator support (yeah, more qmake projects :D ) to be able to debug, and soon I had a working Android retracer!

I could use it via telnet with just a few steps (I’m listing those steps because they can be used to debug the connection):

  • start apitrace application on Android.
  • forward ports (once is enough):
    adb forward tcp:1234 localabstract:apitrace.github.io.eglretrace.stdout
    adb forward tcp:1235 localabstract:apitrace.github.io.eglretrace.stderr
    
  • connect to both channels
    telnet 127.0.0.1 1235
    telnet 127.0.0.1 1234
    
  • then send the arguments via the stdout channel (port 1234 in our example)
    -d /sdcard/0ad/appdata/logs/pyrogenesis.trace
    

But, as you can see, it is not very easy to use, and I couldn't use the UI…

Again, my titanic laziness came into play, and I began to work on a decent way to do retracing on Android using the existing UI.

IMHO, decent retracing on Android needs to meet the following requirements:

  • a way to pull a trace from the device to the desktop
  • a way to push a trace from the desktop to an Android device, if the trace was made on another device and you have it on your desktop
  • a way to link a desktop trace file with an Android trace file
  • a way to automatically start the retrace application on the device
  • a way to forward the ports (using adb) to the desktop
  • a way to connect to those ports and send the params
  • a way to read the data from those ports instead of from QProcess's stdout and stderr

Folks, I'm proud to let you know that I managed to implement all of those requirements, and I'm quite pleased with how it ended up! All the mysterious errors were revealed!

Of course, because I want everyone to enjoy decent Android OpenGL debugging, I created a pull request with all my work.

Using apitrace on Android, I knew which OpenGL commands caused the errors! This helped a 0 A.D. developer (Philip` is his nickname on IRC) fix them in a few minutes. I needed his help because my OpenGL skills are close to 0 (A.D.), and making an Android retracer didn't make me any smarter in that area :).

Of course, this is only the first step towards getting 0 A.D. on Android; there is a lot of work to be done before we'll be able to enjoy it on our tablets!

If anyone wants to join this fantastic free project, please check 0 A.D.'s participate page. 0 A.D. is a project where anyone can help, not only programmers! If you are an artist who can paint, create music, write scenarios, or lend a nice voice, or if you want to help translate the game into your own language, you can join and contribute!

Also apitrace needs contributors, if you want to help, then you can check their TODO list to see if you can make the world a better place ;-) !

My apitrace repo is here and the pull request here.

My 0 A.D. repo is here and the patches here and here you have info about how to compile 0 A.D. on Android.

About KDAB

KDAB is a consulting company dedicated to Qt, offering a wide variety of services and providing training courses.

KDAB believes that it is critical for our business to invest in Qt3D and Qt in general, to keep pushing the technology forward and to ensure it remains competitive. Unlike The Qt Company, we are solely focused on consultancy and training and do not sell licenses.

The post Fun with Android appeared first on KDAB.

Qt Weekly #23: Qt 5.5 enhancements for Linux graphics and input stacks


The upcoming Qt 5.5 has received a number of improvements when it comes to running without a windowing system on Linux. While these target mainly Embedded Linux devices, they are also interesting for those wishing to run Qt applications on their desktop machines directly on the Linux console without X11 or Wayland.

We will now take a closer look at the new approach to supporting kernel mode setting and the direct rendering manager, as well as the recently introduced libinput support.

eglfs improvements

In previous versions there used to be a kms platform plugin. This is still in place in Qt 5.5 but is not built by default anymore. As features accumulate, getting multiple platform plugins to function identically well gets more complicated. From Qt's and the application's point of view the kms and eglfs platforms are pretty much the same: they are both based on EGL and OpenGL ES 2.0. Supporting KMS/DRM is conceptually no different than providing any other device- or vendor-specific eglfs backend (the so-called device hooks providing the glue between EGL and fbdev).

In order to achieve this in a maintainable way, the traditional static, compiled-in hooks approach had to be enhanced a bit. Those familiar with bringing Qt 5 up on embedded boards know this well: in the board-specific makespecs under qtbase/mkspecs/devices one comes across lines like the following:

  EGLFS_PLATFORM_HOOKS_SOURCES = $$PWD/qeglfshooks_imx6.cpp

This compiles the given file into the eglfs platform plugin. This is good enough when building for a specific board, but it is not going to cut it in environments where multiple backends are available and hardcoding any given one is not acceptable. Therefore an alternative, plugin-based approach has been introduced. Looking at the folder qtbase/plugins/egldeviceintegrations after building Qt 5.5, we find the following (assuming the necessary headers and libraries were present while configuring and building):

  libqeglfs-kms-integration.so
  libqeglfs-x11-integration.so

These, as the names suggest, are the eglfs backends for KMS/DRM and X11. The latter is positioned mainly as an internal, development-only solution, although it may also become useful on embedded boards like the Jetson TK1, where the EGL and OpenGL drivers are tied to X11. The former is more interesting for us now: it is the new KMS/DRM backend, and it will be selected and used automatically when no static hooks are specified in the makespecs and the application is not running under X. Alternatively, the plugin to be used can be explicitly specified by setting the QT_QPA_EGLFS_INTEGRATION environment variable to, for instance, eglfs_kms or eglfs_x11. Note that for the time being the board-specific hooks are kept in the old, compiled-in format, so there is not much need to worry about the new plugin-based system unless KMS/DRM is desired. In the future, however, it is expected to gain more attention, since newly introduced board adaptations are recommended to be provided as plugins.
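The selection logic just described could be sketched roughly like this (a simplification for illustration only; the real decision lives inside the eglfs platform plugin, and the exact fallback rules, including the under-X behaviour assumed here, may differ):

```cpp
#include <cstdlib>
#include <string>

// Simplified backend selection: an explicitly set QT_QPA_EGLFS_INTEGRATION
// wins; otherwise pick the KMS plugin, unless the app is running under X
// (assumed fallback for this sketch), in which case pick the X11 one.
inline std::string pick_eglfs_integration(bool running_under_x)
{
    if (const char *forced = std::getenv("QT_QPA_EGLFS_INTEGRATION"))
        return forced;
    return running_under_x ? "eglfs_x11" : "eglfs_kms";
}
```

Note that the static compiled-in hooks, when present, bypass this plugin lookup entirely; the sketch covers only the plugin-based path.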

libinput support

libinput is a library to handle input devices, providing device detection, pointer, keyboard and touch events, and additional functionality like pointer acceleration and proper touchpad handling. It is used by Weston, the reference Wayland compositor, and in the future potentially also in X.org.

Using libinput in place of the traditional evdevmouse|keyboard|touch input handlers of Qt 5 has a number of advantages. By using it, Qt applications get the same behavior, configuration and calibration that other clients, for example Weston, use. It also simplifies bringup scenarios, since there will be no need to fight Qt's input stack separately if libinput is already proven to work.

On the downside, the number of dependencies increases. libudev, libevdev and, optionally, libmtdev are all necessary in addition to libinput. Furthermore, keyboard mapping is performed via xkbcommon. This is not a problem for desktop and many embedded distros, but can be an issue on handcrafted systems, or on an Android baselayer. Therefore libinput support is optional and the evdev* handlers remain the default choice.

Let’s see it in action

How can all this be tested on an ordinary Linux PC? Easily, assuming KMS/DRM is usable (e.g. because the system is running Mesa with working KMS and DRM support). Below is our application (a standard Qt example from qtbase/examples/opengl/qopenglwidget) running as an ordinary X11 client, using the xcb platform plugin, on a laptop with Intel integrated graphics:

Qt app with widgets and OpenGL on X11

Now, let’s switch to another virtual console and set the following before running the application:

  export QT_QPA_PLATFORM=eglfs
  export QT_QPA_GENERIC_PLUGINS=libinput
  export QT_QPA_EGLFS_DISABLE_INPUT=1

This means we will use the eglfs platform plugin, disabling its built-in keyboard, mouse and touchscreen support (that reads directly from the input devices instead of relying on an external library like libinput), and rely on libinput to get mouse, keyboard and touch events.

If everything goes well, the result is something like this:

Qt app with widgets and OpenGL on KMS/DRM

The application is running just fine, even though there is no windowing system here. Both OpenGL and the traditional QWidgets are functional. As an added bonus, even multiple top-level widgets are functional. This was not supported with the old kms platform plugin, whereas eglfs has basic composition capabilities to make this work. Keyboard and mouse input (in this particular case coming from a touchpad) work fine too.

Troubleshooting guide

This is all nice when it works. When it doesn’t, it’s time for some debugging. Below are some useful tips.

(1)
Before everything else, check if configure picked up all the necessary things. Look at qtbase/config.summary and verify that the following are present:

  libinput................ yes

  OpenGL / OpenVG: 
    EGL .................. yes
    OpenGL ............... yes (OpenGL ES 2.0+)

  pkg-config ............. yes 

  QPA backends: 
    EGLFS ................ yes
    KMS .................. yes

  udev ................... yes

  xkbcommon-evdev......... yes

If this is not the case, trouble can be expected since some features will be disabled due to failing configuration tests. These are most often caused by missing headers and libraries in the sysroot. Many of the new features rely on pkg-config so it is essential to get it properly configured too.

(2)
No output on the screen? No input from the mouse or keyboard? Enable verbose logging. Categorized logging is being adopted in more and more areas of Qt, including most of the input subsystem and eglfs. Some of the interesting categories are listed below:

  • qt.qpa.input – Enables debug output both from the evdev and libinput input handlers. Very useful to check if a given input device was correctly recognized and opened.
  • qt.qpa.eglfs.kms – Enables logging from the KMS/DRM backend of eglfs.
  • qt.qpa.egldeviceintegration – Enables plugin-related logging in eglfs.

Additionally, the legacy environment variable QT_QPA_EGLFS_DEBUG can also be set to 1 to get additional information printed, for example about the EGLConfig that is in use.

(3)
Check file permissions. /dev/fb0 and /dev/input/event* must be accessible by the application. Additionally, make sure no other application has a grab (as in EVIOCGRAB) on the input devices.

(4)
Q: I launched my application on the console without working keyboard input, I cannot exit and CTRL+C does not work!
A: Next time do export QT_QPA_ENABLE_TERMINAL_KEYBOARD=1 before launching the app. This is very handy for development purposes, until the initial issues with input are solved. The downside is that keystrokes go to the terminal, so this setting should be avoided afterwards.

The future and more information

While the final release of Qt 5.5 is still some months away, all the new features mentioned above are there in the dev branch of qtbase, ready to be tested by those who like bleeding edge stuff. The work is not all done, naturally. There is room for improvements, for example when it comes to supporting screens connected or disconnected during the application’s lifetime, or using alternative keyboard layouts. These will come gradually later on.

Finally, it is worth noting that the Embedded Linux documentation page, which has received huge improvements over the last few major Qt releases, has been (and is still being) updated with information about the new graphics and input capabilities. Do not hesitate to check it out.

Shadow Mapping in Qt3D 2.0

Continuing our blog post series about the rewrite of Qt3D.

One of the biggest driving factors behind the design of Qt3D 2.0 is the ability to configure the renderer in order to accommodate custom rendering techniques. In this blog post I will explain how to render a scene in Qt3D with shadows.

Shadow mapping in Qt3D. Note the self-shadowing of the plane and of the trefoil knot.


The complete working source code for this blog post is available in the Qt3D repository, under the examples/shadow_map_qml directory. The entire rendering will be configured using QML (i.e. this is a QML-only example), but it’s perfectly possible to also use C++ to achieve the very same result.

Shadow mapping

Shadows are not directly supported by OpenGL, and these days there are countless techniques that can be employed to generate them. Shadow mapping is one of the oldest; it is still widely used due to its simplicity and its ability to generate good-looking shadows at a very small performance cost. The Wikipedia entry on shadow mapping has a very good overview of the modern incarnations of this technique. For our purposes, however, we are going to stick to a very basic version of it.

Shadow mapping is typically implemented as a two-pass rendering. In the first pass we generate the shadow information, and in the second pass we render the scene “normally” (i.e., using any rendering technique of our choice), while using the information gathered in the first pass to draw the shadows.

The idea behind shadow mapping is the following: only the closest fragments to the light are the ones lit. Fragments “behind” other fragments are occluded, and therefore in shadow.

Therefore, in the first pass we draw the scene from the point of view of the light. The information we store is simply the distance of the closest fragment in this “light space”. In OpenGL terms, this corresponds to having a Framebuffer Object, or FBO, with a depth texture attached to it. In fact, the “distance from the eye” is the definition of depth, and the default depth testing done by OpenGL will store only the depth of the closest fragment.

(A color texture attachment is not even needed — we don’t need to shade fragments, only to calculate their depth.)

Exaggerated shadow map texture of the very same scene represented above.


The image above is the shadow map: that is, the depth stored when rendering the scene from the light's point of view; darker colours represent a shallower depth (i.e. closer to the camera). In our scene, the light sits somewhere above the objects, on the right side w.r.t. the main camera (cf. the previous screenshot). This matches the fact that the toy plane is closer to the camera than the other objects.

Once we have generated the shadow map, we then do the second rendering pass. In this second pass we render using the normal scene’s camera; we can use any desired effect here, like for instance Phong shading. The important bit is that in the fragment shader we apply the shadow map algorithm, that is, we ask: is that fragment the closest fragment to the light? If so, then it must be drawn lit; otherwise, it must be drawn in shadow.

How to answer that question is easy once we have the shadow map generated in the first pass. It suffices to remap the fragment into light space, calculating its depth from the light’s point of view as well as its coordinates on the shadow map texture. We can then sample the shadow map texture at those coordinates and compare the fragment’s depth with the result of the sampling: if the fragment is further away, it is in shadow; otherwise, it is lit.
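In plain C++ terms, the per-fragment decision made in the second pass boils down to a single depth comparison. Here is a minimal, hypothetical sketch (the function name and the small bias value are mine, not part of the article’s shaders, which rely on polygon offset instead):

```cpp
// Hypothetical sketch of the shadow map test, written in plain C++
// rather than GLSL. lightSpaceDepth is the depth of the fragment as
// seen from the light; shadowMapDepth is the value sampled from the
// shadow map at the fragment's light-space coordinates. The small
// bias guards against "shadow acne" caused by limited depth precision.
bool inShadow(float lightSpaceDepth, float shadowMapDepth,
              float bias = 0.005f)
{
    // The fragment is occluded if something nearer to the light was
    // recorded in the shadow map at the same position.
    return lightSpaceDepth - bias > shadowMapDepth;
}
```

In the actual shaders below, this comparison is done for us by the comparison mode configured on the depth texture.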

That is the theory behind shadow mapping. Let’s see how to turn it into code using Qt3D.

Getting started

Let’s start from the main.qml file, where we set up the entire scene.

import Qt3D 2.0
import Qt3D.Render 2.0

Entity {
    id: sceneRoot

    Camera {
        id: camera
        projectionType: CameraLens.PerspectiveProjection
        fieldOfView: 45
        aspectRatio: _window.width / _window.height
        nearPlane: 0.1
        farPlane: 1000.0
        position: Qt.vector3d(0.0, 10.0, 20.0)
        viewCenter: Qt.vector3d(0.0, 0.0, 0.0)
        upVector: Qt.vector3d(0.0, 1.0, 0.0)
    }

    Configuration {
        controlledCamera: camera
    }

    Light {
        id: light
    }

    components: [
        ShadowMapFrameGraph {
            id: framegraph
            viewCamera: camera
            lightCamera: light.lightCamera
        }
    ]

    AdsEffect {
        id: shadowMapEffect

        shadowTexture: framegraph.shadowTexture
        light: light
    }

    // Trefoil knot entity
    Trefoil {
        material: AdsMaterial {
            effect: shadowMapEffect
            specularColor: Qt.rgba(0.5, 0.5, 0.5, 1.0)
        }
    }

    // Toyplane entity
    Toyplane {
        material: AdsMaterial {
            effect: shadowMapEffect
            diffuseColor: Qt.rgba(0.9, 0.5, 0.3, 1.0)
            shininess: 75
        }
    }

    // Plane entity
    GroundPlane {
        material: AdsMaterial {
            effect: shadowMapEffect
            diffuseColor: Qt.rgba(0.2, 0.5, 0.3, 1.0)
            specularColor: Qt.rgba(0, 0, 0, 1.0)
        }
    }
}

The first components we create are a Camera, which represents the camera used for the final rendering, and a Configuration element, which allows us to control this camera using the keyboard or the mouse. The parameters of the camera are self-explanatory and there isn’t much to say about them.

We then create a Light entity, which represents our light — a directional spotlight, sitting somewhere above the plane, and looking down at the scene’s origin. This light entity is then used by our custom frame graph, ShadowMapFrameGraph, and our rendering effect, AdsEffect, whose instances are created just after the light.

Lastly, we create three entities for the meshes in the scene: a trefoil knot, a toy aircraft, and a ground plane. The implementation of these three entities is straightforward and will not be covered here; they simply aggregate a mesh, a transformation and a material that uses the effect defined above. Please refer to the previous blog posts for more information about these. For extra fun, the toyplane and the trefoil knot transformations are actually animated.

Light

The Light element is defined inside Light.qml:

import Qt3D 2.0
import Qt3D.Render 2.0

Entity {
    id: root

    property vector3d lightPosition: Qt.vector3d(30.0, 30.0, 0.0)
    property vector3d lightIntensity: Qt.vector3d(1.0, 1.0, 1.0)

    readonly property Camera lightCamera: lightCamera
    readonly property matrix4x4 lightViewProjection: lightCamera.projectionMatrix.times(lightCamera.matrix)

    Camera {
        id: lightCamera
        objectName: "lightCameraLens"
        projectionType: CameraLens.PerspectiveProjection
        fieldOfView: 45
        aspectRatio: 1
        nearPlane : 0.1
        farPlane : 200.0
        position: root.lightPosition
        viewCenter: Qt.vector3d(0.0, 0.0, 0.0)
        upVector: Qt.vector3d(0.0, 1.0, 0.0)
    }
}

As I said before, the light is a directional spotlight. Since in the first rendering pass we’re going to use the light as a camera, I decided to actually put a Camera sub-entity inside of it, and to expose it as a property. Apart from the camera, the light exposes as properties a position, its colour/intensity, and a 4×4 transformation matrix; we’ll see where that matrix gets used, while the rest is straightforward.

Frame graph

In Qt3D 2.0 the frame graph is the data-driven configuration for the rendering. In this example, ShadowMapFrameGraph.qml contains its implementation, which looks like this:

import Qt3D 2.0
import Qt3D.Render 2.0
import QtQuick 2.2 as QQ2

FrameGraph {
    id: root

    property alias viewCamera: viewCameraSelector.camera
    property alias lightCamera: lightCameraSelector.camera
    readonly property Texture2D shadowTexture: depthTexture

    activeFrameGraph: Viewport {
        rect: Qt.rect(0.0, 0.0, 1.0, 1.0)
        clearColor: Qt.rgba(0.0, 0.4, 0.7, 1.0)

        RenderPassFilter {
            includes: [ Annotation { name: "pass"; value: "shadowmap" } ]

            RenderTargetSelector {
                target: RenderTarget {
                    attachments: [
                        RenderAttachment {
                            name: "depth"
                            type: RenderAttachment.DepthAttachment
                            texture: Texture2D {
                                id: depthTexture
                                width: 1024
                                height: 1024
                                format: Texture.DepthFormat
                                generateMipMaps: false
                                magnificationFilter: Texture.Linear
                                minificationFilter: Texture.Linear
                                wrapMode {
                                    x: WrapMode.ClampToEdge
                                    y: WrapMode.ClampToEdge
                                }
                                comparisonFunction: Texture.CompareLessEqual
                                comparisonMode: Texture.CompareRefToTexture
                            }
                        }
                    ]
                }

                ClearBuffer {
                    buffers: ClearBuffer.DepthBuffer

                    CameraSelector {
                        id: lightCameraSelector
                    }
                }
            }
        }

        RenderPassFilter {
            includes: [ Annotation { name: "pass"; value: "forward" } ]

            ClearBuffer {
                buffers: ClearBuffer.ColorDepthBuffer

                CameraSelector {
                    id: viewCameraSelector
                }
            }
        }
    }
}

The code defines a FrameGraph entity, which has a tree of entities as the active frame graph. Any path from the leaves of this tree to the root is a viable frame graph configuration; filter entities can enable or disable such paths, and selector entities can alter the configuration.

In our case, the tree looks like this:

  • Viewport
    • RenderPassFilter
      • RenderTargetSelector
        • ClearBuffer
          • CameraSelector
    • RenderPassFilter
      • ClearBuffer
        • CameraSelector

So we have two paths from the topmost Viewport entity. Each path corresponds to a pass of the shadow map technique; the paths are enabled and disabled using a RenderPassFilter, an entity that can filter depending on arbitrary values defined in a given render pass (in our case: a string). The actual passes are not defined here, but in the effect (see below); the frame graph simply modifies its configuration when a given pass is rendered.

Now, in the shadow map generation pass, we must render to an offscreen surface (the FBO) which has a depth texture attachment: this in Qt3D is represented by the RenderTarget entity, which has a number of attachments. In this case, only one attachment is needed: a depth attachment, defined by the RenderAttachment entity using a type of RenderAttachment.DepthAttachment (stating it should store the depth), and a Texture2D entity which actually configures the texture storage used to store the depth information.

Moreover, in this first pass, we must render using the light’s camera; therefore, we have a CameraSelector entity that sets the camera to the one exported by the Light.

The second pass is much more straightforward: we simply render to the screen using the main camera.

The effect

The bulk of the magic happens in the AdsEffect.qml file, where our main Effect entity is defined. As you can imagine from the name, it’s an effect implementing the ADS shading model (i.e. Phong), with the addition of shadows generated via shadow mapping.

An effect contains the implementation of a particular rendering strategy; in this case, shadow mapping using two passes.

import Qt3D 2.0
import Qt3D.Render 2.0

Effect {
    id: root

    property Texture2D shadowTexture
    property Light light

    parameters: [
        Parameter { name: "lightViewProjection"; value: root.light.lightViewProjection },
        Parameter { name: "lightPosition";  value: root.light.lightPosition },
        Parameter { name: "lightIntensity"; value: root.light.lightIntensity },

        Parameter { name: "shadowMapTexture"; value: root.shadowTexture }
    ]

    techniques: [
        Technique {
            openGLFilter {
                api: OpenGLFilter.Desktop
                profile: OpenGLFilter.Core
                majorVersion: 3
                minorVersion: 2
            }

            renderPasses: [
                RenderPass {
                    annotations: [ Annotation { name: "pass"; value: "shadowmap" } ]

                    shaderProgram: ShaderProgram {
                        vertexShaderCode:   loadSource("qrc:/shaders/shadowmap.vert")
                        fragmentShaderCode: loadSource("qrc:/shaders/shadowmap.frag")
                    }

                    renderStates: [
                        PolygonOffset { factor: 4; units: 4 },
                        DepthTest { func: DepthTest.Less }
                    ]
                },

                RenderPass {
                    annotations: [ Annotation { name : "pass"; value : "forward" } ]

                    bindings: [
                        // Uniforms (those provided by the user)
                        ParameterMapping { parameterName: "ambient";  shaderVariableName: "ka"; bindingType: ParameterMapping.Uniform },
                        ParameterMapping { parameterName: "diffuse";  shaderVariableName: "kd"; bindingType: ParameterMapping.Uniform },
                        ParameterMapping { parameterName: "specular"; shaderVariableName: "ks"; bindingType: ParameterMapping.Uniform }
                    ]

                    shaderProgram: ShaderProgram {
                        vertexShaderCode:   loadSource("qrc:/shaders/ads.vert")
                        fragmentShaderCode: loadSource("qrc:/shaders/ads.frag")
                    }
                }
            ]
        }
    ]
}

The parameters list defines some default values for the effect. Those values will get mapped to OpenGL shader program uniforms, so that in the shaders we can access them. In this case, we expose some information from the Light entity (its position, its intensity, its view/projection matrix defined by its internal camera), as well as the shadow map texture exposed by the frame graph.

In general, it’s possible to put such parameters all the way down, from a Material, to its Effect, to one of the effect’s Techniques. This allows a Material instance to override defaults in an Effect or Technique. (The bindings array provides the same thing, except that it also allows us to rename some parameters. In our case, it renames the ambient/diffuse/specular values defined in the material to the actual uniform names used by the shader programs.)
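Assuming the precedence sketched above, with a Material overriding its Effect and the Effect overriding a Technique (an assumption made here purely for illustration), the lookup can be pictured as a chain of maps. This is not Qt3D’s actual internal code:

```cpp
#include <map>
#include <string>

// Illustrative sketch only: parameter resolution as a chain of maps,
// assuming Material values win over Effect values, which win over
// Technique values. Not the actual Qt3D lookup code.
using ParameterMap = std::map<std::string, float>;

float resolveParameter(const std::string &name,
                       const ParameterMap &material,
                       const ParameterMap &effect,
                       const ParameterMap &technique,
                       float fallback = 0.0f)
{
    if (auto it = material.find(name); it != material.end())
        return it->second;
    if (auto it = effect.find(name); it != effect.end())
        return it->second;
    if (auto it = technique.find(name); it != technique.end())
        return it->second;
    return fallback;
}
```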

We then have a Technique element. In order to be able to adapt the implementation to different hardware or OpenGL versions, an Effect is implemented by providing one or more Technique elements. In our case, only one technique is provided, targeting OpenGL 3.2 Core (or greater).

Inside that technique, we finally have the definition of our two rendering passes. We “tag” each pass with an Annotation entity, matching the ones we’ve set into the frame graph configuration, so that each pass will have different rendering settings.

The first pass is the shadow map generation. To do so, we load a suitable set of GLSL shaders, which are actually extremely simple — they do nothing except apply the MVP transformation, bringing meshes from their model space into clip space (and, remember, in this first pass the light is the camera). The fragment shader is totally empty: there’s no color to be generated, and the depth will be automatically captured for us by OpenGL. Note that in this first pass we also set some custom OpenGL state, in the form of a polygon offset and a depth testing mode.

The second pass is instead a normal forward rendering using Phong shading. The code in the effect entity is extremely simple: we simply configure some parameters (see above) and load a pair of shaders which will be used when drawing.

The shaders

I will not explain the shader code in too much detail, because that would require a crash course in GLSL. However, I will explain the shadow mapping parts. The first part happens in the vertex shader (ads.vert), where we output towards the fragment shader the coordinates of each vertex in light space:

    positionInLightSpace = shadowMatrix * lightViewProjection
        * modelMatrix * vec4(vertexPosition, 1.0);

(Actually, the coordinates get adjusted a little to allow us to easily sample the shadow map texture; that’s the purpose of the shadowMatrix, please refer to a book or to the Wikipedia entry on shadow mapping to understand why that’s necessary).
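For reference, the remapping encoded by the shadowMatrix is, per component, just a scale and an offset taking normalized device coordinates in [-1, 1] to the [0, 1] range used for texture lookups. A tiny sketch (hypothetical helper, not part of the article’s shader code):

```cpp
// Sketch of the per-component bias applied by the shadowMatrix: it
// maps clip-space values from [-1, 1] into the [0, 1] range used for
// sampling the shadow map texture.
float biasToTextureRange(float ndc)
{
    return ndc * 0.5f + 0.5f;
}
```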

The second part happens in the fragment shader (ads.frag), where we sample the shadow map, and if the currently processed fragment is behind the one closest to the light, then the current fragment is in shadow (and only gets ambient contribution), otherwise it gets full Phong shading:

    float shadowMapSample = textureProj(shadowMapTexture, positionInLightSpace);

    vec3 ambient = lightIntensity * ka;
    vec3 result = ambient;

    if (shadowMapSample > 0)
        result += dsModel(position, normalize(normal));

    fragColor = vec4(result, 1.0);

And that’s it!

Conclusions

In this post I’ve shown how it’s possible to configure Qt3D in order to achieve a custom rendering effect. Although shadow mapping is one of the simplest rendering techniques, the point is demonstrating how Qt3D imposes no particular rendering algorithm or strategy. You can easily experiment with a variety of multipass effects, e.g. introduce stencil shadows, or maybe that effect you’ve just seen in a SIGGRAPH paper…

About KDAB

KDAB is a consulting company dedicated to Qt, offering a wide variety of services and providing training courses.

KDAB believes that it is critical for our business to invest into Qt3D and Qt, in general, to keep pushing the technology forward and to ensure it remains competitive. Unlike The Qt Company, we are solely focused on consultancy and training and do not sell licenses.

The post Shadow Mapping in Qt3D 2.0 appeared first on KDAB.

Using Google Analytics to Monitor Qt Applications

Companies often want to monitor usage of specific software applications to improve their business over time. Reports and dashboards that provide visual representations of usage data can be used to inform product development, establish product value and influence business strategy. Google Analytics (GA), while originally intended for web analytics, offers a number of advantages as a framework for monitoring usage of Qt-based applications.

Nice Blog Post From Boundary Devices

Check out this nice blog post about Qt for device creation written by Eric Nelson from Boundary Devices. The post nicely summarizes what Qt for device creation is all about. You can also find an easy-to-follow, step-by-step guide on how to use the BYOS (Build Your Own Stack) tooling, which is provided as part of the Qt for device creation offering.

QtSingleApplication replacement for Qt5

Kadu uses QtSingleApplication to ensure that only one instance can run on one profile at a time.

Unfortunately this solution is not compatible with Qt 5, so I've decided to update it a bit. My solution is available at Gitorious. It uses code from the Nokia solution, so I hope I got the licensing and copyright right (it is LGPL - one of the licences from Nokia's original files).

It has a different interface from the original one - three lambdas are accepted as constructor parameters: one for running the first instance, one for running a second instance, and one for accepting messages. Not very Qt-ish, but possibly less error-prone.
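To illustrate the shape of such an interface, here is a hypothetical plain C++ sketch (the class and member names are invented, and a real implementation would detect the first instance via a lock file or local socket rather than a boolean argument):

```cpp
#include <functional>
#include <string>
#include <utility>

// Hypothetical sketch of a single-instance guard configured with
// three callables: one run by the first instance, one run by any
// subsequent instance, and one invoked when a message arrives from
// another instance. Names are invented for illustration.
class SingleInstanceGuard
{
public:
    SingleInstanceGuard(std::function<void()> onFirstInstance,
                        std::function<void(const std::string &)> onSecondInstance,
                        std::function<void(const std::string &)> onMessage)
        : m_onFirst(std::move(onFirstInstance))
        , m_onSecond(std::move(onSecondInstance))
        , m_onMessage(std::move(onMessage))
    {
    }

    // A real implementation would decide this itself (lock file,
    // local socket, shared memory); here the caller tells us.
    void run(bool isFirstInstance, const std::string &args)
    {
        if (isFirstInstance)
            m_onFirst();
        else
            m_onSecond(args);
    }

    void receiveMessage(const std::string &message) { m_onMessage(message); }

private:
    std::function<void()> m_onFirst;
    std::function<void(const std::string &)> m_onSecond;
    std::function<void(const std::string &)> m_onMessage;
};

// Small demonstration helper: reports which callable fired.
std::string dispatch(bool isFirstInstance, const std::string &args)
{
    std::string fired;
    SingleInstanceGuard guard(
        [&] { fired = "first"; },
        [&](const std::string &a) { fired = "second:" + a; },
        [&](const std::string &m) { fired = "message:" + m; });
    guard.run(isFirstInstance, args);
    return fired;
}
```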

Overview of Qt3D 2.0 – Part 2

An Example of Rendering with Qt3D

In the previous article we learned about the requirements and high-level architecture of Qt3D 2.0. In order to put some of this into context and to give you a concrete example of how it looks to draw something in Qt3D using the QML API, we will now briefly show the important parts of one of the simple examples that will ship with Qt3D. We will start off simple and just draw a single entity (a trefoil knot) but to make it slightly more interesting we will use a custom set of shaders to implement a single-pass wireframe rendering method. This is what we will draw:

trefoil-wireframe

As mentioned in the previous article, the renderer aspect looks for entities that have some geometry, a material and optionally a transformation. These are all specified in the form of subclasses of QComponent which have been exported to the QML engine in the form of Mesh, Material and Transform respectively. So let’s use these components to make a custom QML item in TrefoilKnot.qml:

import Qt3D 2.0
import Qt3D.Render 2.0

Entity {
    id: root

    property alias x: translation.dx
    property alias y: translation.dy
    property alias z: translation.dz
    property alias scale: scaleTransform.scale
    property alias theta: thetaRotation.angle
    property alias phi: phiRotation.angle
    property Material material

    components: [ transform, mesh, root.material ]

    Transform {
        id: transform
        Translate { id: translation }
        Scale { id: scaleTransform }
        Rotate { id: thetaRotation; axis: Qt.vector3d( 1.0, 0.0, 0.0 ) }
        Rotate { id: phiRotation;   axis: Qt.vector3d( 0.0, 1.0, 0.0 ) }
    }

    Mesh {
        id: mesh
        source: ":/assets/obj/trefoil.obj"
    }
}

Let’s break this down to see what’s going on here. We start off by importing the Qt3D 2.0 module that provides the Entity type and value type helpers like Qt.vector3d(). We also import the Qt3D.Render 2.0 module that provides the components and other types picked up by the renderer aspect. If we were using components from other aspects, then we would also need to import the corresponding QML module here too.

We then use Entity as the root element of the custom QML type exposing some custom properties just as you would with any other type in QML.

Entities, Entities, Everywhere

In addition to aggregating components, Entity objects can be used to group child objects together. This is analogous to how Item is used in Qt Quick 2.

entities-everywhere

At the bottom of the TrefoilKnot.qml file we instantiate a Transform component and a Mesh component. The Mesh component is very simple. We use its source property to load in a static set of geometry (vertex positions, normal vectors, texture coordinates etc.) from a file in the Wavefront Obj format. This data was exported from the excellent and free Blender application. The Transform component specifies how the renderer should transform the geometry when it is drawn with the OpenGL pipeline. Exactly how this happens is a topic for a future article. For now, simply be happy that you are able to combine an ordered set of transformations into a single Transform component and that your shaders will have this information available to them automatically via some standard named uniform variables.
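To see why the transformations form an ordered set, consider a tiny plain C++ sketch (hypothetical types, not the actual Qt3D classes): scaling a point and then translating it is not the same as translating and then scaling.

```cpp
// Hypothetical sketch showing that transformation order matters when
// collapsing a Transform component's children into one model matrix.
struct Vec3 { float x, y, z; };

Vec3 scalePoint(Vec3 p, float s) { return {p.x * s, p.y * s, p.z * s}; }
Vec3 translatePoint(Vec3 p, Vec3 d) { return {p.x + d.x, p.y + d.y, p.z + d.z}; }

Vec3 scaleThenTranslate(Vec3 p, float s, Vec3 d)
{
    return translatePoint(scalePoint(p, s), d);
}

Vec3 translateThenScale(Vec3 p, float s, Vec3 d)
{
    return scalePoint(translatePoint(p, d), s);
}

// Helpers for a quick self-check: starting from x = 1, scaling by 2
// then translating by 5 gives 7, while translating first gives 12.
float scaleThenTranslateX() { return scaleThenTranslate({1.0f, 0.0f, 0.0f}, 2.0f, {5.0f, 0.0f, 0.0f}).x; }
float translateThenScaleX() { return translateThenScale({1.0f, 0.0f, 0.0f}, 2.0f, {5.0f, 0.0f, 0.0f}).x; }
```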

Dynamic Per-Vertex Data

In addition to the Mesh element, Qt3D also allows dynamic generation of per-vertex attribute data via some C++ hooks called by the task-based engine.

Simply instantiating components is not enough, however. In order for them to imbue an entity with special behaviour, the entity must aggregate the components by means of its components property. This allows components to be shared between multiple entities very easily. In this example we have components for the transform and mesh which are contained within our custom type. The final component, of type Material, is in this case provided by means of a property on the TrefoilKnot itself. This allows users of this type to easily customise the appearance of the entity, which we will make use of shortly.

Now that we have defined a custom entity, let’s see how to use it to actually get our desired result. The code for our main.qml file looks like this:

import Qt3D 2.0
import Qt3D.Render 2.0
import QtQuick 2.1 as QQ2

Entity {
    id: root

    // Use the renderer configuration specified in ForwardRenderer.qml
    // and render from the mainCamera
    components: [
        FrameGraph {
            activeFrameGraph: ForwardRenderer {
                camera: mainCamera
            }
        }
    ]

    BasicCamera {
        id: mainCamera
        position: Qt.vector3d( 0.0, 0.0, 25.0 )
    }

    Configuration {
        controlledCamera: mainCamera
    }

    WireframeMaterial {
        id: wireframeMaterial
        effect: WireframeEffect {}
        ambient: Qt.rgba( 0.2, 0.0, 0.0, 1.0 )
        diffuse: Qt.rgba( 0.8, 0.0, 0.0, 1.0 )
    }

    TrefoilKnot {
        id: trefoilKnot
        material: wireframeMaterial
    }
}

We start off again with the same import statements as before but this time we also add in a namespaced import for the Qt Quick 2.1 module as we will need this shortly for some animations. Once again we also use Entity as the root element simply to act as a parent for its children. In this sense, Entity is much like the Item element type from Qt Quick.

Here, we will gloss over the FrameGraph component as that is worthy of an entire article on its own. For now, it suffices to say that the contents of the ForwardRenderer type are what completely configures the renderer without touching any C++ code at all. It’s pretty cool stuff but you’ll have to wait for the details as this is already a long article. Similarly, please ignore the Configuration element. This is a temporary hack that is needed to have mouse control of the camera until we finish implementing that part correctly using aspects and components.

The BasicCamera element is a trivial wrapper around the built-in Camera type, which, as you can probably deduce, represents a virtual camera. It has properties for things like the near and far planes, field of view, aspect ratio, projection type, position, orientation etc.

Multiple Cameras

It is trivial to use multiple cameras and choose between them using the framegraph for all or part of the scene rendering. We will cover this in a future article.

Next up we have the WireframeMaterial element. This is a custom type that wraps up the built-in Material type. Qt3D has a robust and very flexible material system that allows multiple levels of customisation. This caters for different rendering approaches on different platforms or OpenGL versions; allows multiple rendering passes with different state sets; provides mechanisms for overriding of parameters at different levels; and also allows easy switching of shaders — all from C++ or using QML property bindings. Once again, to do this topic justice would require more space than we have here so we will defer it for another time. For now, the take away point is that properties on a Material can easily be mapped through to uniform variables in a GLSL shader program that is itself specified in the referenced effect property.

Supported Shader Stages

Qt3D supports all of the OpenGL programmable rendering pipeline stages: Vertex, tessellation control, tessellation evaluation, geometry and fragment shaders. Compute shaders require a little more API work for getting data into and out of them before they are fully supported.

Instantiating the TrefoilKnot and setting our material on it is simplicity itself. Once we have done that and with the parts we have glossed over, the Qt3D engine in conjunction with the renderer aspect has enough information to finally render our mesh using the material we specified.

Of course we can go further and make things a little more interesting by making use of the animation elements provided by Qt Quick 2. When we animate properties of our custom TrefoilKnot or the WireframeMaterial, the properties of their components get updated by means of the usual QML property binding mechanism. For example:

WireframeMaterial {
    id: wireframeMaterial
    effect: WireframeEffect {}
    ambient: Qt.rgba( 0.2, 0.0, 0.0, 1.0 )
    diffuse: Qt.rgba( 0.8, 0.0, 0.0, 1.0 )

    QQ2.SequentialAnimation {
        loops: QQ2.Animation.Infinite
        running: true

        QQ2.NumberAnimation {
            target: wireframeMaterial
            property: "lineWidth"
            duration: 1000
            from: 1.0
            to: 3.0
        }

        QQ2.NumberAnimation {
            target: wireframeMaterial
            property: "lineWidth"
            duration: 1000
            from: 3.0
            to: 1.0
        }

        QQ2.PauseAnimation { duration: 1500 }
    }
}

The property updates are noticed by the QNode base class and are automatically sent through to the corresponding objects in the renderer aspect. The renderer then takes care of translating the property updates into new values for uniform variables in the GLSL shader programs. You can find the full source code for this example in the Qt 5 git repository (see below); when you run it, you get a view of a trefoil knot with the width of the wireframe lines pulsing. All the heavy lifting is being done by the GPU, of course. All the CPU has to do is the property animations and the little bit of work needed to translate the scenegraph and framegraph into raw OpenGL calls.

Even More Win?

In the future, even the animations will be able to be performed across multiple cores by providing a specialised animation aspect. It is also already possible to animate on the GPU via a custom shader program and material.

What Is the Status of Qt3D?

As of December 2014, most of the core framework of Qt3D is now in place. There are a few areas that we want to tidy up and extend before release to make it easier for users to extend. For the renderer aspect, most of the features for an initial release are working. We have just finished implementing the first pass of using data gathered from entities in the scenegraph to populate Uniform Buffer Objects (UBOs) that can be bound to OpenGL shader programs to make large amounts of data readily available. Typical use cases for UBOs are for sets of material or lighting parameters but they can be used for anything you can think of.

The two big things that we have yet to implement for the renderer aspect are:

  • Support for instanced rendering. Instancing is a way of getting the GPU to draw many copies (instances) of a base object, where each copy varies in some way: often in position, orientation, colour, material properties, or scale. Our plan is to provide an API similar to Qt Quick’s Repeater element. In this case the delegate will be the base object and the model will provide the per-instance data. So whereas an entity with a Mesh component attached eventually gets transformed into a call to glDrawElements, an entity with an instanced component will be translated into a call to glDrawElementsInstanced.
  • Qt Quick 2 Integration. There are a number of ways in which Qt Quick 2 could be integrated with Qt3D (or vice versa). For example, you may simply wish to embed a Qt3D scene into a custom Qt Quick 2 item to put it into your UI. Alternatively, you may want to overlay a Qt Quick 2 scene as a UI over your Qt3D scene. You may also want to be able to use Qt Quick 2 to render into a texture and then use that texture within your Qt3D scene, perhaps to apply it to some geometry such as a sign post. With a custom Qt Quick 2 item based on QQuickFramebufferObject and by making use of QQuickRenderControl all of these options should be possible.
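As a rough mental model of why instancing pays off, compare the number of draw calls a naive per-object loop issues with what a single instanced call needs. This is an illustrative plain C++ sketch with invented names; no actual OpenGL calls are made:

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch: per-instance data (here just an offset) is
// uploaded once, and the GPU repeats the base geometry per entry.
struct Instance { float dx, dy, dz; };

// One glDrawElements-style call per object without instancing...
std::size_t drawCallsWithoutInstancing(const std::vector<Instance> &instances)
{
    return instances.size();
}

// ...versus a single glDrawElementsInstanced-style call for all of them.
std::size_t drawCallsWithInstancing(const std::vector<Instance> &instances)
{
    return instances.empty() ? 0 : 1;
}
```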

In addition, we have yet to implement a sane set of default materials that are ready to use out of the box. The materials example included in the Qt3D examples shows a good selection of what some defaults may look like. They need to be made a little more generic and tested on other platforms, particularly OpenGL ES 2 and ES 3.

Beyond the renderer, the other aspect to be shipped when we release Qt3D 2.0 will be the keyboard and mouse input aspect. Support for keyboard input is already implemented and is usable. Mouse support will come in the New Year. For now we have the hacked together solution mentioned in the above wireframe example for controlling the camera.

Qt3D API

Please note that the Qt3D API is not yet frozen. The API will change before release, but hopefully not by much.

What Can You Do To Help Qt3D?

So far Qt3D 2.0 has been almost entirely designed and implemented by KDAB engineers. I would like to highlight the efforts of Paul Lemire, James Turner, Kevin Ottens, Giuseppe D’Angelo and Milian Wolff who have done a huge amount of work to rebuild Qt3D from the ground up. A lot of work has gone into Qt3D, much of it not visible, in the form of prototypes that were discarded and never saw the light of day, API reviews, testing, debugging, and profiling. This has resulted in over 1200 commits since we moved development onto the public Qt git repositories.

Most of the work in rewriting Qt3D has been funded by our employer, KDAB, and also in the spare time of the above people. Recently we were fortunate enough to get some external funding from our friends at eCortex to help implement some of the missing functionality of Qt3D. This was a fantastic boost for us, because it allowed KDAB to have Paul Lemire focus primarily on Qt3D for an extended period without distraction in addition to facilitating an incredibly helpful API review.

Funding or Using Qt3D

If you wish to help contribute to Qt3D (or any other part of Qt), but you don’t have the time or resources to write patches or if you just wish to invest in Qt3D with some R&D money you have left over, then please do consider funding us to do work in Qt3D on your behalf. Also, the best way to drive new features is to use a technology in the real world, so if you want to use Qt3D (or any other part of Qt) in your next project then please get in touch with us.

If you want to get involved directly with Qt3D or if you just want to try it out, then take a look at how to build Qt 5 from source and drop in to the #qt-3d channel on freenode. You will find a bunch of us in there most of the time. If you need help getting up to speed or if you want something to work on or need some guidance around the architecture please feel free to ping us in there or on the development mailing list. Please use the dev branch to build Qt3D.

We hope to release Qt3D 2.0 along with Qt 5.5.0 in the spring, but the more help or funding we can get with implementation, documentation, testing, and examples, the better shape Qt3D will be in for everybody. So thank you to everybody who has provided help and feedback so far. We are happy with the direction Qt3D is going in and we are really looking forward to an initial release and many more releases in the future.

About KDAB

KDAB is a consulting company dedicated to Qt and offering a wide variety of services and providing training courses in:

KDAB believes that it is critical for our business to invest into Qt3D and Qt, in general, to keep pushing the technology forward and to ensure it remains competitive. Unlike The Qt Company, we are solely focused on consultancy and training and do not sell licenses.

The post Overview of Qt3D 2.0 – Part 2 appeared first on KDAB.

2014 Year in Review

As the year draws to a close, I thought it would be good to take a look back at some of the major events of the Qt world in 2014.

Qt is now on a regular schedule of two major releases per year. We saw Qt 5.3.0 come out in May and Qt 5.4.0 in December. Minor releases occur, as needed, between the major releases.

2014 and the Expanding Internet of Things

During 2014, the user experience (UX) group at ICS worked on our usual fare of mobile and desktop apps, but we also saw a large expansion of embedded device projects that fall into three categories: kiosk information systems, in-vehicle infotainment systems (IVI) and robotics control systems. Each area presents unique and complex challenges for a UX designer. However, we noticed some common requests between all three of those areas: the preference for touchscreens and the desire to be connected to the Internet.

Meteor and Qt: match made in heaven

Meteor, say hello to cross-platform native app-development. Qt, say hello to a modern, reactive web back-end.

TL;DR You can write a native Qt / QML app and have Meteor as the back-end for real-time data distribution and remote procedures.


I've been looking at recent developments, and since Meteor had its 1.0 release recently, I decided to take it for a spin... with Qt.

For those that have been living under a rock, Meteor is a node.js based web framework that can be used to create modern, responsive web apps. It does have Cordova/Phonegap support, so you can package your website as an app and deploy to Android or iOS, but obviously that leaves you short when it comes to truly native look and feel, performance, and full range of APIs.

Qt on the other hand is a native cross-platform application development platform and has an excellent track record when it comes to the sheer number of supported native platforms. It does have some server/cloud oriented functionality and services (see engin.io and https://qtcloudservices.com/ in general), but it's hard to beat the popularity and ecosystem (and therefore development speed) happening around Meteor.

Can we make this a match? Meteor for the web and server side, and Qt/QML for making snappy native applications: going beyond calling RESTful APIs and polling on one side, and beyond the API and performance limitations of packaged node.js/PhoneGap apps on the other?

The answer is luckily yes. What makes Meteor so uniquely well positioned for this is its Data Distribution Protocol, or DDP for short. It's a websocket-based channel over which Meteor sends JSON messages to the clients. There is nothing special about this, so we can have a Qt client (it turns out there is already a healthy list of DDP clients/libs, but none C++ or Qt-oriented).
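For illustration, here is a sketch in plain C++ of the first messages a DDP client sends after the websocket opens. The message shapes follow the published DDP specification; the websocket transport (e.g. QWebSocket on the Qt side) is omitted and the helper function names are made up for this sketch:

```cpp
#include <string>

// First messages of a DDP session, built as raw JSON strings.
// Shapes follow the public DDP spec; transport is omitted here.

std::string ddpConnectMessage() {
    // Sent once after the websocket opens: negotiate the protocol version.
    return R"({"msg":"connect","version":"1","support":["1"]})";
}

std::string ddpSubscribeMessage(const std::string &id, const std::string &name) {
    // Ask the server to start publishing a named record set (e.g. "todos").
    return R"({"msg":"sub","id":")" + id + R"(","name":")" + name + R"("})";
}
```

Once subscribed, the server pushes `added`/`changed`/`removed` messages for that record set, which is exactly what a QML model wrapper like Qondrite can translate into model updates.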

Enter Qondrite, a lightweight QML wrapper for Asteroid, a Javascript client for Meteor. With Qondrite, you can use Meteor as the source of your models (yes, everything shown goes through models, not hardcoded lists and data). To check the feasibility, I took the Todo example from Meteor (meteor create --example todos) and used Qondrite to interface to the meteor application. As usual, the source of this demo is available on github.

The end result?




As promised, an integrated Meteor-Qt stack, with a whole lot of opportunity to take the idea even further. For example, there is no reason why we couldn't send QML instead of Meteor's Blaze HTML templates, making fully dynamic UIs for native applications - how awesome would that be?

This makes Meteor super well suited for multiplayer games, chat applications and any type of app that benefits from a scalable, client-agnostic back-end and low-latency, high-performance clients.

Feel free to discuss at Hacker News or on meteor-talk!

Two more Qt Champions for 2014


The year is coming to an end, but Qt Champions continue to make Qt better for everyone!

I’d like to welcome two more Qt Champions for 2014, Robin Burchell and Dyami Caliri!

Dyami is a professional Qt developer, but has started contributing to Qt itself very actively in 2014, and thus will be awarded the Qt Champion title in the category of ‘Rookie of the Year’.  You can find Dyami’s code in the Qt base and serial port implementations, not easy places to get started on.

Robin (or w00t for those of you on IRC) is a more familiar name to many Qt contributors; during the past years he has worked on quite a lot of things in Qt. This year he has made an impact in Qt Wayland, among other things. Robin is being awarded the special title of Maverick, as he is someone who does not always go by the book, but will get the job done.

Robin and Dyami will be getting their customised prizes and a one year Qt professional license. Please join me in congratulating our new Qt Champions!

With these two awards, we will be closing Qt Champions for 2014. More champion titles will be awarded in the autumn of 2015, when we see who has amazed us next.

KDAB contributions to Qt 5.4

Qt 5.4 was released just last week! The new release comes right on schedule (following the 6-months development cycle of the Qt 5 series), and brings a huge number of new features.

KDAB engineers have contributed lots of code to Qt during the last few months. Once more, KDAB is the second largest contributor to Qt (the first being The Qt Company itself). The commit stream has been constant, as you can see in this graph showing the last 16 weeks:


Contributions to Qt by employer (excluding Digia), from here

In this blog post I’ll show some of the notable features developed by KDAB engineers that you are going to find in Qt 5.4.

Qt WebChannel

The Qt WebChannel module has been introduced to fill the gap left when Qt switched from WebKit 1 (QWebView) to WebKit 2 (WebView in QML) and Blink (QtWebEngine). That is: the possibility of having QObject instances exposed in the JavaScript environment of a loaded page.

The ultimate cause of this lost feature is that modern HTML engines employ multiple processes for performance and security reasons, and therefore the same kind of deep integration that the WebKit Bridge made possible was not available any more.

The WebChannel brings back this awesome functionality and extends it even further: it is now possible to export QObjects to any remote browser, not only the WebViews owned by the same Qt application.

For more information, please refer to this blog post by my colleague Milian Wolff.

The Declarative State Machine Framework

Back in the day, Qt 4.6 introduced a State Machine Framework based on SCXML. It consisted of a few C++ classes to build state machines out of individual states and transitions, and had quite a nice feature set: it supported parallel states, final states, history states, guarded transitions and so on.

Unfortunately, writing state machines by hand requires a lot of boilerplate C++ code. For instance, in order to add a transition to a new state, we must create a new QState object, create a new transition object, and finally add the transition.
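To make the boilerplate concrete, here is a toy hand-rolled state machine in plain C++. This is illustrative only and deliberately not the actual Qt State Machine API; it just shows that every state and every transition has to be created and registered explicitly:

```cpp
#include <map>
#include <string>
#include <utility>

// Toy hand-rolled state machine (illustrative, not the Qt API):
// every transition is registered one call at a time.
struct Machine {
    std::string current;
    // (fromState, event) -> toState
    std::map<std::pair<std::string, std::string>, std::string> transitions;

    void addTransition(const std::string &from, const std::string &event,
                       const std::string &to) {
        transitions[{from, event}] = to;
    }

    void post(const std::string &event) {
        // Events with no matching transition in the current state are ignored.
        auto it = transitions.find({current, event});
        if (it != transitions.end())
            current = it->second;
    }
};
```

Even a two-state toggle needs several registration calls and an explicit event loop hookup; the declarative QML form collapses all of this into nested state and transition elements.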

Ford Motor Company, in a technical partnership with KDAB, has generously contributed high-level QML wrappers for the C++ state machine. The new technology goes under the name of Declarative State Machine Framework; it uses QML as a Domain Specific Language (DSL) for writing declarative state machines, while being backed by the full C++ state machine framework.

DSM allows users to create state machines using QML, removing the need for boilerplate code and resulting in a nice, compact representation of state machines.

What’s more, it also allows removing the imperative bits from the state machine (that is, which properties should be updated when entering a state); it instead enables any given QML element to bind its property to whether the state machine is in a state or not.

Please refer to the module documentation for more information, as well as the short talk by my colleague Kevin Funk from this year’s Qt Developer Days.

QNX co-maintainership

Due to KDAB’s sustained efforts into supporting QNX, we’re very pleased to hear that my colleague Rafael Roquetto has been nominated co-maintainer of the QNX support in Qt. He’s going to join the ranks with our colleague Bogdan Vatra, maintainer of the Android support. Congratulations, Rafael!

Other contributions

In no particular order:

qmllint

qmllint is a syntax checker for QML files. You can therefore run it on your QML files before shipping them with your application, or add a qmllint step to your CI system / SCM hooks. If you want to know more, please refer to this blog post by my colleague Sérgio Martins.

New hooking system for tooling

Qt has always had a number of private entry points that were supposed to be used by debugging/profiling tools. Unfortunately, due to aggressive compiler optimizations, those hooks were almost always compiled out in release builds, and therefore their usage on any platform but Linux/GCC was extremely problematic.

KDAB’s Senior Engineer Volker Krause developed a solution for this problem, which can be found in Qt 5.4.

The main user of this feature is of course GammaRay, one of KDAB's flagship products. GammaRay is a free software solution that provides high-level debugging for Qt, allowing the developer to inspect individual subsystems of any complex application.

Lots of bugfixes

Working on real projects, we do know that code does not always behave as advertised. At the same time, we strive to make Qt the best product for cross-platform, high-performance application development.

Therefore, it should not be a surprise that KDAB engineers fixed over 50 reported bugs between Qt 5.3 and Qt 5.4 (and, of course, fixed even more problems which didn't even have an associated bug report!). Only a few weeks ago KDAB launched a new service, FixMyQtBug, to help companies that build products with Qt and are struggling with upstream bugs; our skills and dedication show that we are indeed the experts when it comes to Qt, and we are willing to fix your Qt bugs as well.

The post KDAB contributions to Qt 5.4 appeared first on KDAB.

Overview of Qt3D 2.0 – Part 1

Introduction

Back in the days when Qt was owned by Nokia, a development team in Brisbane had the idea of making it easy to incorporate 3D content into Qt applications. This happened around the time of the introduction of the QML language and technology stack, and so it was only natural that Qt3D should also have a QML based API in addition to the more traditional C++ interface like other frameworks within Qt.

Qt3D was released alongside Qt 4 and saw only relatively little use before Nokia decided to divest Qt to Digia. During this transition, the Qt development office in Brisbane was closed and unfortunately Qt3D never saw a release alongside Qt 5. This chain of events left the Qt3D code base without a maintainer and left to slowly bit rot.

With OpenGL taking a much more prominent position in Qt 5’s graphical stack — OpenGL is the underpinning of Qt Quick 2’s rendering power — and with OpenGL becoming a much more common part of customer projects, KDAB decided that it would be good for us and for the Qt community at large if we took over maintainership and development of the Qt3D module. To this end, several KDAB engineers have been working hard to bring Qt3D back to life and moreover to make it competitive to other modern 3D frameworks.

This article is the first in a series that will cover the capabilities, APIs, and implementation of Qt3D in detail. Future articles will cover how to use the API in various ways from basic to advanced with a series of walked examples. For now, we will begin in this article with a high-level overview of the design goals of Qt3D; some of the challenges we faced; how we have solved them; what remains to be done before we can release Qt3D 2.0; and what the future may bring beyond Qt3D 2.0.

What Should Qt3D Do?

When asked what a 3D framework such as Qt3D should actually do, most people unfamiliar with 3D rendering simply say something along the lines of “I want to be able to draw 3D shapes and move them around and move the camera”. This is, of course, a sensible baseline, but when pressed further you get back wishes that typically include the following kinds of things:

  • 2D and 3D
  • Meshes
  • Materials
  • Shadows

Then, when you move on and ask the next target group, those who already know about the intricacies of 3D rendering, you get back some more technical terms such as:

That is already a fairly complex set of feature requests, but the real killer is that last entry which translates into ‘I want to be able to configure the renderer in ways you haven’t thought of’. Given that Qt3D 1.0 offered both C++ and QML APIs this is something that we wished to continue to support, but when taken together with wanting to have a fully configurable renderer this led to quite a challenge. In the end, this has resulted in something called the framegraph.

Framegraph vs Scenegraph

A scenegraph is a data-driven description of what to render.
The framegraph is a data-driven description of how to render.

Using a data-driven description in the framegraph allows us to choose between a simple forward renderer, a forward renderer with a z-fill pass, or a deferred renderer; to decide when to render any transparent objects; and so on. Also, since this is all configured purely from data, it is very easy to modify, even dynamically at runtime, all without touching any C++ code!
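The idea can be sketched in plain C++ (illustrative only; this is not Qt3D's actual framegraph API): the renderer walks a plain data description of passes, so switching rendering techniques means swapping data, not rewriting renderer code.

```cpp
#include <string>
#include <vector>

// Sketch of the framegraph idea: the renderer is driven by a data
// description of passes rather than hard-coded pipeline logic.
struct Pass {
    std::string name;      // e.g. "z-fill", "opaque", "transparent"
    bool sortBackToFront;  // transparent objects need back-to-front ordering
};

using FrameGraph = std::vector<Pass>;

FrameGraph simpleForward() {
    return { {"opaque", false}, {"transparent", true} };
}

FrameGraph forwardWithZFill() {
    // Same renderer code as above; one extra pass described purely as data.
    return { {"z-fill", false}, {"opaque", false}, {"transparent", true} };
}
```

Because the description is data, an application could even switch from `simpleForward()` to `forwardWithZFill()` at runtime without recompiling anything.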

Once you move beyond the essentials of getting some 3D content on to the screen, it becomes apparent that people also want to do a lot of other things related to the 3D objects. The list is extensive and wide ranging but very often includes requests like:

This is obviously a tall order, and one that we couldn't possibly hope to satisfy out of the box with the limited resources available. However, it is clear that, in order to support these features in the future, we needed to do some groundwork now to architect Qt3D 2.0 to be extensible and flexible enough to act as a host for such extensions. The work around this topic took a lot of effort and several aborted prototypes before we settled on the current design. We will introduce the resulting architecture later and then cover it in more detail in an upcoming article.

Beyond the above short and long term feature goals, we also wanted to make Qt3D perform well and scale up with the number of available CPU cores. This is important given how modern hardware is improving performance — by increasing the numbers of cores rather than base clock speed. Also, when analysing the above features we can intuitively hope that utilising multiple cores will work quite naturally since many tasks are independent of each other. For example, the operations performed by a path finding module will not overlap strongly with the tasks performed by a renderer (except maybe for rendering some debug info or statistics).

Overview of the Qt3D 2.0 Architecture

The above set of requirements turned out to be quite a thorny problem, or rather a whole set of them. Fortunately, we think we have found solutions to most of them and the remaining challenges look achievable.

For the purposes of discussion, let's start at the high level and consider how to implement a framework that is extensible enough to deal with not just rendering but also all of the other features, plus more that we haven't thought of.

At its heart, Qt3D is all about simulating objects in near-realtime, and then very likely rendering the state of those objects onto the screen somehow. Let's break that down and start by asking the question: 'What do we mean by an object?'

Of course, in such a simulation system there are likely to be numerous types of object. Considering a concrete example will help shed some light on the kinds of objects we may see. Let's consider something simple, like a game of Space Invaders. Real-world systems are likely to be much more complex, but this will suffice to highlight some issues. Let's begin by enumerating some typical object types that might be found in an implementation of Space Invaders:

  • The player’s ground cannon
  • The ground
  • The defensive blocks
  • The enemy space invader ships
  • The enemy boss flying saucer
  • Bullets shot from enemies and the player

In a traditional C++ design these types of object would very likely end up implemented as classes arranged in some kind of inheritance tree. Various branches in the inheritance tree may add additional functionality to the root class for features such as: “accepts user input”; “plays a sound”; “can be animated”; “collides with other objects”; “needs to be drawn on screen”.

I’m sure you can classify the types in our Space Invaders example against these pieces of functionality. However, designing an elegant inheritance tree for even such a simple example is not easy.

This approach, and other variations on inheritance, has a number of problems that we will discuss in a future article, including:

  • Deep and wide inheritance hierarchies are difficult to understand, maintain and extend.
  • The inheritance taxonomy is set in stone at compile time.
  • Each level in the class inheritance tree can only classify upon a single criterion or axis.
  • Shared functionality tends to ‘bubble up’ the class hierarchy over time.
  • As library designers we can’t ever know all the things our users will want to do.

Anybody that has worked with deep and wide inheritance trees is likely to have found that unless you understand, and agree with, the taxonomy used by the original author, it can be difficult to extend them without having to resort to some ugly hacks to bend classes to our will.

For Qt3D, we have decided to largely forego inheritance and instead focus on aggregation as the means of imparting functionality onto an instance of an object. Specifically, for Qt3D we are using an Entity Component System (ECS). There are several possible implementation approaches for ECSs and we will discuss Qt3D’s implementation in detail in a later article but here’s a very brief overview to give you a flavour.

An Entity represents a simulated object but by itself is devoid of any specific behaviour or characteristics. Additional behaviour can be grafted on to an entity by having the entity aggregate one or more Components. A component is a vertical slice of behaviour of an object type.

What does that mean? Well, it means that a component is some piece of behaviour or functionality in the vein of those we described for the objects in our Space Invaders example. The ground in that example would be an Entity with a Component attached that tells the system it needs rendering and how to render it. An enemy space invader would be an Entity with Components attached that cause it to be rendered (like the ground), but also that it emits sounds, can be collided with, is animated and is controlled by a simple AI. The player object would have mostly the same components as the enemy space invader, except that in place of the AI component it would have an input component allowing the player to move the object around and fire bullets.


On the back-end of Qt3D we implement the System part of the ECS paradigm in the form of so-called Aspects. An aspect implements the particular vertical slice of functionality imbued to entities by a combination of one or more of their aggregated components. As a concrete example, the renderer aspect looks for entities that have mesh, material and, optionally, transformation components. If it finds such an entity, the renderer knows how to take that data and draw something nice from it. If an entity doesn’t have those components then the renderer aspect ignores it.

Qt3D is an Entity-Component-System

Qt3D builds custom Entities by aggregating Components that impart additional capabilities. The Qt3D engine uses Aspects to process and update entities with specific components.

Similarly, a physics aspect would look for entities that have some kind of collision volume component and another component that specifies other properties needed by such simulations like mass, coefficient of friction etc. An entity that emits sound would have a component that says it is a sound emitter along with when and which sounds to play.

A very nice feature of the ECS is that because they use aggregation rather than inheritance, we can dynamically change how an object behaves at runtime simply by adding or removing components. Want your player to suddenly be able to run through walls after gobbling a power-up? No problem. Just temporarily remove that entity’s collision volume component. Then when the power-up times out, add the collision volume back in again. There is no need to make a special one-off subclass for PlayerThatCanSometimesWalkThroughWalls.
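A minimal ECS along these lines can be sketched in plain C++ (illustrative only; these are not Qt3D's actual QEntity/QComponent/aspect classes): entities are just bags of components, and each aspect processes only the entities carrying the components it cares about.

```cpp
#include <set>
#include <string>
#include <vector>

// Minimal illustrative ECS: an entity aggregates component tags
// rather than inheriting behaviour from a class hierarchy.
struct Entity {
    std::string name;
    std::set<std::string> components;
    bool has(const std::string &c) const { return components.count(c) != 0; }
};

// "Renderer aspect": only entities with both a mesh and a material are drawn.
std::vector<std::string> renderPass(const std::vector<Entity> &scene) {
    std::vector<std::string> drawn;
    for (const auto &e : scene)
        if (e.has("mesh") && e.has("material"))
            drawn.push_back(e.name);
    return drawn;
}

// "Physics aspect": only entities with a collision volume take part.
bool collides(const Entity &e) { return e.has("collisionVolume"); }
```

The power-up trick from the text then becomes a one-liner at runtime: erase the entity's `collisionVolume` tag to walk through walls, and re-insert it when the power-up expires; no special subclass needed.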

Hopefully that gives enough of an indication of the flexibility of Entity Component Systems to let you see why we chose it as the basis of the architecture in Qt3D. Within Qt3D the ECS is implemented according to the following simple class hierarchy.

Qt3D’s ‘base class’ is QNode, which is a very simple subclass of QObject. QNode adds to QObject the ability to automatically communicate property changes through to aspects, plus an ID that is unique throughout the application. As we will see in a future article, the aspects live and work in additional threads, and QNode massively simplifies the task of getting data between the user-facing objects and the aspects. Typically, subclasses of QNode provide additional supporting data that is then referenced by components. For example, a QShaderProgram specifies the GLSL code to be used when rendering a set of entities.
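The notification idea behind this can be sketched in plain C++ (with made-up names; this is not the real QNode API): each front-end node carries an application-wide unique ID, and property changes are recorded as messages that back-end aspects can later consume by ID.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Sketch of the change-propagation idea: a property change is stamped
// with the originating node's unique id and queued for the back-end.
struct Change {
    std::uint64_t nodeId;
    std::string property;
    std::string newValue;
};

struct Node {
    std::uint64_t id;
    std::vector<Change> *queue;  // stands in for a thread-safe channel

    void setProperty(const std::string &name, const std::string &value) {
        // A real implementation would also store the value locally and use
        // a lock-free queue; here we only show the notification flow.
        queue->push_back({id, name, value});
    }
};

std::uint64_t nextNodeId() {
    static std::uint64_t counter = 0;
    return ++counter;  // unique throughout the application
}
```

Because the back-end identifies nodes purely by ID, the aspects never need to touch the user-facing QObject from their own threads.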


Components in Qt3D are implemented by subclassing QComponent and adding in any data necessary for the corresponding aspect to do its work. For example, the Mesh component is used by the renderer aspect to retrieve the per-vertex data that should be sent down the OpenGL pipeline.

Finally, QEntity is simply an object that can aggregate zero or more QComponents as described above.

Adding a brand new piece of functionality to Qt3D, either as part of Qt or specific to your own applications, in a way that can take advantage of the multi-threaded back-end, consists of the following steps:

  • Identify and implement any needed components and supporting data
  • Register those components with the QML engine (only if you wish to use the QML API)
  • Subclass QAbstractAspect and implement your subsystem's functionality.

Of course anything sounds easy when you say it fast enough, but after implementing the renderer aspect and also doing some investigations into additional aspects we’re pretty confident that this makes for a flexible and extensible API that, so far, satisfies the requirements of Qt3D.

Qt3D has a Task-Based Engine

Aspects in Qt3D get asked each frame for a set of tasks to execute along with dependencies between them. The tasks are distributed across all configured cores by a scheduler for improved performance.

Summary

We have seen that the needs of Qt3D extend far beyond implementing a simple forward renderer exposed to QML. Rather, what is needed is a fully configurable renderer that allows you to quickly implement any rendering pipeline that you need. Furthermore, Qt3D also provides a generic framework for near-realtime simulations beyond rendering. Qt3D is cleanly separated into a core and any number of aspects that can implement any functionality they wish. The aspects interact with components and entities to provide some slice of functionality. Examples of possible future aspects include physics, audio, collision, AI and path finding.

In the next part of this series, we shall demonstrate how to use Qt3D and the renderer aspect to produce a custom shaded object and how to animate it, all from within QML.


About KDAB

KDAB is a consulting company dedicated to Qt and offering a wide variety of services and providing training courses in:

KDAB believes that it is critical for our business to invest into Qt3D and Qt, in general, to keep pushing the technology forward and to ensure it remains competitive. Unlike The Qt Company, we are solely focused on consultancy and training and do not sell licenses.

The post Overview of Qt3D 2.0 – Part 1 appeared first on KDAB.

Rendering PDF Content with XpdfWidget

A consulting project I worked on recently needed to display an interactive PDF document in the style of Adobe Reader on a touchscreen device running embedded Linux using Qt and QML. I have been working with Qt for nearly ten years and had not come across this requirement before, so of course I turned to the Internet to see what was available and I came across this page, which lists all options available for dealing with PDF files from Qt.

Swift and Isode

Having been working on this behind the scenes for a while, we’ve got some good news. After years of quietly supporting Swift, Isode are now taking Swift formally into their product set. This means more developers working on Swift and the opportunity for more rapid development and advancement of the projects. In practical terms, we think the only obvious change externally is likely to be an increase in activity in the commit logs and improvements to the software, both of which have been becoming increasingly obvious in recent months as Isode’s been increasing support.

Some details about which you may care:

  • Isode is a long-term producer of messaging and directory servers, including the M-Link XMPP server; it has been the provider of commercial licenses for Swiften, was responsible for the port of Swiften to Java in the form of Stroke, and is where Kev works for his day-job.
  • Kev’s going to manage the Swift projects within Isode.
  • Swift, Stroke et al. will be remaining open source, with commercial licensing and support available.
  • The project will continue to run in much the same way, with public code review systems etc.
  • We’ll still be accepting community-supplied patches
  • We’ll still be encouraging testing and feedback by the community
  • We think this is going to be an opportunity to make Swift better

We hope you’ll join us in our excitement for Swift’s future.

Qt 5.4 released


I am happy to announce that Qt 5.4 has been released today and is available for download from qt.io. Together with Qt 5.4, we have also released Qt Creator 3.3 and an update to Qt for device creation on embedded Linux and embedded Android.

But let’s start with Qt 5.4. One of the main focus areas of this Qt release has been around Web technologies and we have a lot of cool new things to offer there.

Renewed Web Story

HTML5 and Web technologies have become more and more important over the last years, and we have spent the last year developing a completely renewed Web offering for Qt. The Qt WebEngine module is the result of a long-term R&D project where we adopted the Chromium Web engine for use within Qt. With Qt 5.4, it is fully supported on the most used desktop and embedded platforms. Qt WebEngine provides you with an easy-to-use API to embed Web content in both Qt Widgets and Qt Quick based applications.

The new Qt WebChannel module provides a simple-to-use bridge between QML/C++ and HTML/Javascript. This enables the creation of hybrid applications that use both Qt and Web technologies. Communication between both sides happens by exposing QObjects in the Web context. The module works not only with Qt WebEngine, but also with any other browser engine that has support for Web sockets.

As a third component, Qt 5.4 introduces a Technology Preview of a new module called Qt WebView. The Qt WebView module offers a more limited API to embed the web browser that is native to the underlying operating system for use cases where the full Qt WebEngine isn’t needed, or where it can’t be used because of restrictions coming from the underlying OS. In Qt 5.4, the Qt WebView module supports iOS and Android.

Together with the Qt WebSockets module introduced in Qt 5.3, Qt now has great support for many of the latest Web technologies and makes interacting with Web content very easy. Qt WebEngine and Qt WebView make it very easy to embed HTML5, Qt WebChannel creates the communication channel between Qt and HTML5 that is required for hybrid applications, and Qt WebSockets allows for an easy communication between Qt and many Web services.

Qt 5.4 also still contains the older Qt WebKit module. Qt WebKit is still supported, but as of Qt 5.4 we consider it done, so no new functionality will be added to it. We are also planning to deprecate Qt WebKit in future releases, as the new Qt WebEngine provides what is needed. In most use cases, migrating from Qt WebKit to Qt WebEngine is rather straightforward. If you are starting a new project that requires web capabilities, we advise that you already start using Qt WebEngine.

Qt for WinRT | Completing our Cross-Platform Offering

The second big new feature of Qt 5.4 is the completion of our cross-platform story with the full support for Qt on Windows Runtime. Qt for Windows Runtime was already added as a supported Beta to Qt 5.3, and has now reached the state where it is a fully supported part of Qt. With Qt for Windows Runtime, you can create applications for the Windows Store, targeting both Windows Phone 8.1 and above as well as Windows 8.1 and newer.

This port completes our cross-platform story and we feel that Qt now supports all currently relevant desktop, embedded and mobile operating systems.

Graphics updates

Qt 5.4 also brings a lot of other new features and improvements. One focus area has been graphics. With Qt 5.4, we now introduce better support for high-resolution displays on our desktop platforms. The support is still considered experimental in Qt 5.4; if you are interested, check out the overview documentation.

OpenGL support on Windows has been problematic in a few cases, since there aren't always good drivers available. To help with this problem, Qt now has the capability to select the OpenGL implementation dynamically at application start-up time. Qt will choose between the native OpenGL driver, ANGLE's OpenGL ES 2.0 implementation (which translates to DirectX), or a pure software rasterizer.

Qt Data Visualization has been updated to version 1.2 including additional features such as volume rendering and texture support for surface graphs and performance improvements. Qt Charts has now been updated to version 2.0 including better Qt 5 modularization, binary packages and minor improvements.

Other improvements on the graphics side include the new QOpenGLWidget class, which replaces the old QGLWidget class from Qt 4 and allows us to deprecate the old Qt OpenGL module, as all relevant functionality can now be found in Qt Gui. QOpenGLContext can now wrap existing native contexts. You can use the new QQuickRenderControl to render Qt Quick scenes into an offscreen buffer. For more details check out this blog post.

Finally, Qt 5.4 contains a technology preview of our new Qt Canvas3D module, which implements a WebGL-like API for Qt Quick. This module makes it very easy to use JavaScript code written against WebGL within Qt Quick.

We have so many new things in Qt 5.4 that we can’t list them all here. Before you keep moving down the blog, check out our Qt 5.4 highlights video.

Other new features

A large number of other new features have also found their way into Qt 5.4. I’ll mention just a few of them in this blog post.

Qt now supports Bluetooth Low Energy on Linux using BlueZ. Support for other platforms will come in later versions of Qt. Bluetooth LE makes it possible to communicate with many modern Bluetooth devices, such as wearables.

On Android, we now have native-looking Qt Quick Controls, as well as smaller deployment packages and faster application startup times. For iOS and Mac OS X, we now have support for the latest operating system versions, Xcode 6 and the new code signing style required to push applications into the App Store. We worked especially hard to fix all issues related to the new style on Mac OS X 10.10.

Qt Qml now comes with support for Qt State Machines through the new QtQml.StateMachine import, and Qt Core has gained a new QStorageInfo class that gives you information about mounted devices and volumes.

Qt Quick Controls now also come with a brand new, great-looking ‘flat style’ that can be used on all platforms.

Qt 5.4 also comes with a brand new version of Qt Creator, Qt Creator 3.3. For details on all the new things in it, check out our separate blog post.

Qt for device creation

Today, we also release a new version of our development package for device creation. Here are some of the new features that are included in this release:

We now have preliminary support to run Qt Applications on Wayland using the Weston compositor on i.MX6 based devices, including support for Video and Qt WebEngine.

We added a new B2Qt Utils module that gives easy access to device-specific settings such as the display backlight, hostname or power state from both C++ and QML. The B2Qt Wi-Fi module is now officially supported and makes it easy to configure your Wi-Fi network.

Apart from these new features we have added a large amount of improvements:

  • eAndroid Qt Multimedia plugin update.
    • The implementation of Qt Multimedia for embedded Android has been refactored, resulting in a cleanly separated and more easily maintained plugin for that platform.
  • SD Card Flashing Wizard for easier b2qt image deployment
    • Simple wizard for writing system image to SD card
    • Integrated into Qt Creator
  • BYOS (Build Your Own Stack) Improvements
    • Improved scripts for initializing and managing the Yocto build environment: Using repo tool for managing the numerous meta repositories needed for different devices.
  • eLinux: Camera support for i.MX6 devices
    • All necessary GStreamer plugins for using camera in Qt Quick applications are now integrated into reference device images
    • MIPI camera support added

With this version, we have also added new hardware reference platforms, including a low-end profile for the GPU-less Freescale Vybrid. The complete list of reference hardware supported by Qt for device creation can be found in the documentation.

Qt Quick without OpenGL

Another great new feature for our embedded customers is the new Qt Quick 2D Renderer module. This new commercial add-on allows using Qt Quick on embedded devices that have no OpenGL hardware acceleration. The new Qt Quick 2D Renderer module can render most of Qt Quick using pure software rasterization or 2D hardware acceleration through e.g. DirectFB or Direct2D. The module supports all of Qt Quick with the exception of OpenGL shaders and particles.

This enables the creation of Qt Quick based user interfaces with a modern look and feel on lower end devices than before. In addition, the ability to use the Qt Quick API across a device portfolio spanning devices both with and without OpenGL significantly reduces the amount of UI code you need to write and maintain. To showcase the Qt Quick 2D Renderer’s capabilities, we have added the Toradex Colibri VF50 and VF61 devices as new reference hardware to the Boot to Qt software stack, demonstrating our ability to run on the Freescale Vybrid SoCs.

Introduction of LGPL v3

As announced earlier, the open-source version of Qt 5.4 is also made available under the LGPLv3 license. The new licensing option allows us at The Qt Company to introduce more value-add components for the whole Qt ecosystem without making compromises on the business side. It also helps to protect third-party developers’ freedom from consumer device lock-down and prevents Tivoization as well as other misuse.

In Qt 5.4, a few modules are exclusively available under GPL/LGPLv3 or commercial license terms. These modules are the new Qt WebEngine and the Technology Previews of Qt WebView and Qt Canvas 3D. The Android style is only available under a commercial license or the LGPLv3. You can find more details here.

Thanks to the Qt Community

Qt 5.4 adds a lot of new functionality and improvements. Some of them would not have been possible without the help of the great community of companies and people that contribute to Qt and are not employees of The Qt Company.

While I can’t mention everybody here, I would like to still name a few. I’d like to thank our Qt Service Partner KDAB for continuously being the second biggest contributor to Qt, and in this release especially Milian Wolf for his work on Qt WebChannel. I’d also like to thank Orgad Shaneh from Audiocodes for his continuous help on and involvement with Qt Creator and Thiago Macieira from Intel for his long-term involvement. I’d also like to mention Brett Stottlemyer from Ford for contributing the new QML State Machine Framework and Ivan Komissarov for the new QStorageInfo class.

Make sure to try Qt 5.4, www.qt.io/download. Enjoy!

Qt Creator 3.3.0 released


We are happy to announce the Qt Creator 3.3.0 release today. This release comes with a great set of new features as well as a big amount of bug fixes.

I talked about many of the new features and improvements already in the beta release blog post. For today’s release, Alessandro locked himself in his office for a while and created this “What’s new in Qt Creator 3.3” video!

Other features include support for the Gradle build system for Android development, a refactoring action for adopting the new connect style in Qt 5, BareMetal support for CMake projects, and an option to use the Qt Quick Compiler for your Qmake based QML projects. Please also see our change log for a more complete list of changes.

For users of the Professional or Enterprise edition, we added experimental support for running the Clang Static Analyzer on your projects, as a new tool in Analyze mode. The scene graph events category in the QML Profiler has been significantly improved and will now visualize the time ranges of all scene graph related events instead of showing them as a list of numbers. You can also see input events in the QML profiler now, in a separate category. In Qt Quick Designer we added direct editing of TabViews, and additional checks for form files (.ui.qml) as well as buttons for exporting form items for use in the implementation files.

Qt Creator 3.3.0 is part of the installers for Qt 5.4.0, which is also released today.

Both are now available for download on qt.io. Please post issues in our bug tracker. You also can find us on IRC on #qt-creator on irc.freenode.net, and on the Qt Creator mailing list.

Note: With Qt Creator 3.3 we drop support for compiling Qt Creator with Qt 4. The minimal required Qt version to compile Qt Creator itself is currently Qt 5.3.1. This does not affect compilation of your own projects, of course. We still support development of Qt 4-based applications with Qt Creator. If you want to use custom designer plugins in Qt Creator, you must make them compilable with Qt 5 as well, though.

Multi-process embedded systems with Qt for Device Creation and Wayland


With the Qt 5.4 update for Qt for Device Creation it is now possible – on certain embedded systems – to run Qt applications on top of a Wayland compositor by relying only on the provided reference images without any additional modifications. While the stable and supported approach remains eglfs, the lightweight platform plugin that allows running fullscreen Qt applications on top of EGL and fbdev with the best possible performance, those who do not require any of the enhanced tooling but need to have multiple GUI applications running on the same screen can start experimenting with a Wayland-based system already today.

In this post we will take a look at how this can be done on i.MX6 based systems, such as the Sabre SD and BD-SL-i.MX6 boards.

The wayland platform plugin provided by the Qt Wayland module is now an official part of Qt 5.4.0. Whenever the necessary dependencies, like the wayland client libraries and the wayland-scanner utility, are available in the sysroot, the platform plugin will be built together with the rest of Qt. In the toolchains and the ready-to-be-flashed reference images for Sabre Lite and Sabre SD everything is in place already. They contain Wayland and Weston 1.4.0, based on Yocto’s recipes (daisy release).

We will use Weston as our compositor. This provides a desktop-like experience out of the box. Those looking for a more customized experience should look no further than the Qt Compositor libraries of the Qt Wayland module, which provide the building blocks for easily creating custom compositors with Qt and QML. These components are still under development, so stay tuned for more news regarding them in the future.

Video playback and some other applications running on a Sabre SD board

Now let’s see what it takes to run a compositor and our Qt applications on top of it on an actual device. It is important to note that there is no tooling support for such a setup at the moment. This means that deploying and debugging from Qt Creator will likely not function as expected. Some functionality, like touch input and the Qt Virtual Keyboard, will function in a limited manner. For example, the virtual keyboard will appear on a per-application, per-window basis instead of being global to the entire screen. Support for such features will be improved in future releases. For the time being performance and stability may also not be on par with the standard single-process offering. On the positive side, advanced features like accelerated video playback and Qt WebEngine are already functional.

  • Qt Enterprise Embedded’s reference images will launch a Qt application upon boot. This is either /usr/bin/qtlauncher, containing various demos, or the user’s custom application deployed previously via Qt Creator. The launch and lifetime of these applications is managed by the appcontroller utility. To kill the currently running application, log in to the device via adb (adb shell) or ssh (ssh root@device_ip) and run appcontroller --stop.
  • Now we can launch a compositor. For now this will be Weston. The environment variable XDG_RUNTIME_DIR may not be set so we need to take care of that first: export XDG_RUNTIME_DIR=/var/run followed by weston --tty=1 --use-gal2d=1 &. The --use-gal2d=1 option makes Weston perform compositing via Vivante’s hardware compositing APIs and the GC320 composition core instead of drawing textured quads via OpenGL ES.
  • Once the “desktop” has appeared, we are ready to launch clients. The default Qt platform plugin is eglfs, so this has to be changed either via the QT_QPA_PLATFORM environment variable or by passing -platform wayland to the applications. The former is better in our case because we can then continue to use appcontroller to launch our apps. Let’s run export QT_QPA_PLATFORM=wayland followed by appcontroller --launch qtlauncher. The --launch option disables some of the tooling support and makes sure a subsequent application launch via appcontroller will not terminate the previous application, as is the case with the default, eglfs-based, single GUI process system.
  • At this point a decorated window should appear on the screen, with the familiar demo application running inside. If the window frame and the title bar are not necessary, performance can be improved greatly by disabling such decorations altogether: just do export QT_WAYLAND_DISABLE_WINDOWDECORATION=1 before launching the application. Note that the window position can still be changed by connecting a keyboard and mouse, and dragging with the Windows/Command key held down.

To summarize:

appcontroller --stop
export XDG_RUNTIME_DIR=/var/run
weston --tty=1 --use-gal2d=1 &
export QT_QPA_PLATFORM=wayland
appcontroller --launch qtlauncher

And that’s it: we have successfully converted our device from a single-GUI-app-per-screen model to a desktop-like, multi-process environment. To see it all in action, check out the following video:

Notes From a UX Pro Over a Cup of Joe: UX and Coffee

Welcome back for a chat about user experience (UX) in the real world. Today, I want to talk about user experience and coffee. Now, I know not everyone drinks coffee, so for the sake of discussion, let’s assume that you want to drink some coffee.

Initial Decision

Qt Weekly #22: How to help Qt Support help you more efficiently


If you have used Qt Support before, you may have had to deal with a bit of back and forth with Support trying to get to the heart of the problem you are having. To help speed that process up in the future, here are some tips to ensure we get all the information that might be useful to us right away.

Installation issues:
– First check that you have all the dependencies for building Qt as indicated here:

http://qt-project.org/doc/qt-4.8/requirements.html

http://qt-project.org/doc/qt-5/build-sources.html
– If the problem occurs when building Qt from source, then send the entire output from configure as well as the output from the make command so we can see everything that has happened in the run-up to the build issue
– If installing from one of the binary installers then re-run the installer with the --verbose option and send the log file it generates

General qmake issues:
– If the makefile/vcxproj/xcodeproj file is not being created correctly then re-run qmake with the extra options “-d -d -d -d 2>debug.txt” and send the debug.txt file it generates. This contains all the debug information from qmake which may be useful.

License not found or incorrectly reported as invalid:
– Delete the .qt-license file from:

*nix/Mac: $HOME/.qt-license
Windows: %USERPROFILE%/.qt-license

and delete the following extra file/registry entry:

*nix: $HOME/.config/Digia/LicenseManager.conf
Mac: $HOME/Library/Preferences/com.digia.LicenseManager.plist
Windows: HKEY_CURRENT_USER\Software\Digia\LicenseManager in the registry
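On *nix and Mac, the cleanup above can be scripted in one go (a sketch based on the paths listed above; `rm -f` is harmless if a file does not exist, and the Windows registry key still has to be removed with the registry editor):

```shell
# Remove the stale license file and the per-platform license manager state.
rm -f "$HOME/.qt-license"
rm -f "$HOME/.config/Digia/LicenseManager.conf"                   # *nix
rm -f "$HOME/Library/Preferences/com.digia.LicenseManager.plist"  # Mac
echo "stale license data removed"
```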

Problems with the application starting up:
– If the application itself fails to start then check that the dependencies are found:

*nix: ldd ./executable
Mac: otool -L ./executable.app/Contents/MacOS/executable
Windows: depends executable.exe

On Windows, if you do not have depends already you can get it from www.dependencywalker.com. Send the output to us if problems still occur (in the case of Windows you can save it as a .dwi file and send that).

If it is a plugin that is failing to start then run the tool from the same location as the executable on the plugin directly instead. For example “ldd ./platforms/libxcb.so”. In addition run the application with the environment variable:

QT_DEBUG_PLUGINS=1

set and send the output that you get from running the application. On Windows you might need to use DebugView in order to see the output if you don’t have a console window or are not running through a debugger.
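A small sketch of setting the variable for a single run only, so it does not leak into the rest of your session (`sh -c 'echo …'` stands in for the real application here):

```shell
# An environment variable prefixed to a command is visible to that child
# process only, not to the parent shell afterwards.
child_output=$(QT_DEBUG_PLUGINS=1 sh -c 'echo "child sees: $QT_DEBUG_PLUGINS"')
echo "$child_output"                              # prints: child sees: 1
echo "parent sees: ${QT_DEBUG_PLUGINS:-unset}"    # prints: parent sees: unset
# Real usage:  QT_DEBUG_PLUGINS=1 ./yourapp 2> plugin-debug.txt
```

Capturing stderr to a file, as in the last line, makes it easy to attach the plugin diagnostics to a support request.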

If the application is crashing:
– Send the stack trace you get when it crashes (in debug mode) and give us the details surrounding what was happening at the time of the crash. Additionally if an example can be produced to reproduce the problem then this certainly goes a long way.

If you have found a bug:
– Sometimes we need to pass on the example that is created to reproduce the bug by a customer into a public bug report system. Please indicate in that case if it is ok to use the example in a public bug report or not so we can save time having to ask first.

Problems related to the network module:
– If at all possible apply the patches from here:
Qt 4:

--- a/src/network/kernel/qhostinfo_win.cpp
+++ b/src/network/kernel/qhostinfo_win.cpp
@@ -134,8 +134,8 @@ QHostInfo QHostInfoAgent::fromName(const QString &hostName)
     QHostInfo results;

 #if defined(QHOSTINFO_DEBUG)
-    qDebug("QHostInfoAgent::fromName(%p): looking up \"%s\" (IPv6 support is %s)",
-           this, hostName.toLatin1().constData(),
+    qDebug("QHostInfoAgent::fromName(): looking up \"%s\" (IPv6 support is %s)",
+           hostName.toLatin1().constData(),
            (local_getaddrinfo && local_freeaddrinfo) ? "enabled" : "disabled");
 #endif

@@ -248,8 +248,8 @@ QHostInfo QHostInfoAgent::fromName(const QString &hostName)

 #if defined(QHOSTINFO_DEBUG)
     if (results.error() != QHostInfo::NoError) {
-        qDebug("QHostInfoAgent::run(%p): error (%s)",
-               this, results.errorString().toLatin1().constData());
+        qDebug("QHostInfoAgent::run(): error (%s)",
+               results.errorString().toLatin1().constData());
     } else {
         QString tmp;
         QList addresses = results.addresses();
@@ -257,8 +257,8 @@ QHostInfo QHostInfoAgent::fromName(const QString &hostName)
             if (i != 0) tmp += ", ";
             tmp += addresses.at(i).toString();
         }
-        qDebug("QHostInfoAgent::run(%p): found %i entries: {%s}",
-               this, addresses.count(), tmp.toLatin1().constData());
+        qDebug("QHostInfoAgent::run(): found %i entries: {%s}",
+               addresses.count(), tmp.toLatin1().constData());
     }
 #endif
     return results;

--- a/src/network/network.pro
+++ b/src/network/network.pro
@@ -3,13 +3,13 @@
TARGET = QtNetwork
QPRO_PWD = $$PWD
DEFINES += QT_BUILD_NETWORK_LIB QT_NO_USING_NAMESPACE
-#DEFINES += QLOCALSERVER_DEBUG QLOCALSOCKET_DEBUG
-#DEFINES += QNETWORKDISKCACHE_DEBUG
-#DEFINES += QSSLSOCKET_DEBUG
-#DEFINES += QHOSTINFO_DEBUG
-#DEFINES += QABSTRACTSOCKET_DEBUG QNATIVESOCKETENGINE_DEBUG
-#DEFINES += QTCPSOCKETENGINE_DEBUG QTCPSOCKET_DEBUG QTCPSERVER_DEBUG QSSLSOCKET_DEBUG
-#DEFINES += QUDPSOCKET_DEBUG QUDPSERVER_DEBUG
+DEFINES += QLOCALSERVER_DEBUG QLOCALSOCKET_DEBUG
+DEFINES += QNETWORKDISKCACHE_DEBUG
+DEFINES += QSSLSOCKET_DEBUG
+DEFINES += QHOSTINFO_DEBUG
+DEFINES += QABSTRACTSOCKET_DEBUG QNATIVESOCKETENGINE_DEBUG
+DEFINES += QTCPSOCKETENGINE_DEBUG QTCPSOCKET_DEBUG QTCPSERVER_DEBUG QSSLSOCKET_DEBUG
+DEFINES += QUDPSOCKET_DEBUG QUDPSERVER_DEBUG
QT = core
win32-msvc*|win32-icc:QMAKE_LFLAGS += /BASE:0x64000000

Qt 5:

--- a/src/network/network.pro
+++ b/src/network/network.pro
@@ -2,13 +2,13 @@ TARGET = QtNetwork
QT = core-private

DEFINES += QT_NO_USING_NAMESPACE
-#DEFINES += QLOCALSERVER_DEBUG QLOCALSOCKET_DEBUG
-#DEFINES += QNETWORKDISKCACHE_DEBUG
-#DEFINES += QSSLSOCKET_DEBUG
-#DEFINES += QHOSTINFO_DEBUG
-#DEFINES += QABSTRACTSOCKET_DEBUG QNATIVESOCKETENGINE_DEBUG
-#DEFINES += QTCPSOCKETENGINE_DEBUG QTCPSOCKET_DEBUG QTCPSERVER_DEBUG QSSLSOCKET_DEBUG
-#DEFINES += QUDPSOCKET_DEBUG QUDPSERVER_DEBUG
+DEFINES += QLOCALSERVER_DEBUG QLOCALSOCKET_DEBUG
+DEFINES += QNETWORKDISKCACHE_DEBUG
+DEFINES += QSSLSOCKET_DEBUG
+DEFINES += QHOSTINFO_DEBUG
+DEFINES += QABSTRACTSOCKET_DEBUG QNATIVESOCKETENGINE_DEBUG
+DEFINES += QTCPSOCKETENGINE_DEBUG QTCPSOCKET_DEBUG QTCPSERVER_DEBUG QSSLSOCKET_DEBUG
+DEFINES += QUDPSOCKET_DEBUG QUDPSERVER_DEBUG
win32-msvc*|win32-icc:QMAKE_LFLAGS += /BASE:0x64000000

MODULE_PLUGIN_TYPES = \

and then build the network module again and rerun the application. Send us the output that it generates as this will give us a lot of network debug information to look at in case it is useful.

Problems with debugging inside Qt Creator:
– Please include with your support request the contents of Window > Views > Debugger Log, as this shows all the debugger commands and information.

General tips:
– If you experience a strange crash in your application and you have subclassed Q[Core|Gui]Application, double-check that the constructor’s signature is correct: it should take an “int &” and not just an “int”. The wrong signature compiles fine without a warning, but because the application class stores a reference to argc it can cause problems further down the line.

FSFE needs your support for 2015!

“Use, study, share, improve” – these four freedoms are the definition of Free Software for contributors all around the world. The focus of their communities is to produce content and code that can be shared freely, and to have fun and satisfaction on the way. But there is a whole other, non-technical side to the success of Free Software:

  • These freedoms need protection, as they may conflict with the interests of some states and some businesses.
  • These freedoms need explaining, as the benefits they contribute to society and their relation to basic liberties are not always obvious and easy to understand.
  • And these freedoms need organizing, to give the various Free Software communities and contributors one voice where they are usually not heard – for example in capitals, in Brussels, in trade associations, or in research.

The Free Software Foundation Europe does all that, transparently and consistently, so that we don’t have to do it and can concentrate on creating great things. For that, FSFE deserves our support. FSFE is independent and financed by people like you, mostly through donations.

FSFE Logo

For 2015, FSFE is fundraising to secure the budget that finances its work:

Free Software Foundation Europe is a pan-European charity, established in 2001 to empower users to control technology. To enable the organisation to intensify its work with the European Commission and to let more people know about Free Software, the FSFE needs another 190,000 Euro for its work in 2015. Next year, the FSFE will push harder than ever to weave software freedom into the fabric of our society.

Donate!

There are multiple ways to take part in this and become a supporter, for example you could sign up as a fellow (like I did). Or your company could become a sponsor. There is also the option for a single, one-off donation. Every small donation helps:

To continue its work in 2015, the FSFE will need 420,000 Euro in total. The organisation has already secured 230,000 Euro thanks to existing sustaining members, regular donations, and merchandise sales. The FSFE requires another 190,000 Euro to underwrite its work in 2015.

FSFE is the one organization in Europe that has software freedom as its main focus. If creating general understanding and support for Free Software and Open Standards in politics, business, law and society at large is important to you, please consider supporting this mission in one of the ways described above.


Filed under: Coding, CreativeDestruction, English, FLOSS, KDE, OSS, Qt

Native Android style in Qt 5.4


Qt Quick Controls – Gallery example on Android 4.4

As you might have heard, the upcoming Qt 5.4 release delivers a new Android style. This blog post explains what it means in practice for different types of Qt applications.

Qt Widgets

Previously, it was possible to get a native look for Qt Widgets applications on Android with the help of Ministro, a system-wide Qt library installer/provider for Android. In Qt 5.4, selected parts of the Ministro source code have been incorporated into Qt’s Android platform plugin. This makes it possible for Qt applications to look native without Ministro, though applications wishing to use services provided by Ministro can continue to do so. In other words, Qt Widgets applications will look native regardless of the deployment method: system-wide or bundled Qt libraries.

Qt Quick Controls

The big news is that Qt 5.4 ships a brand new Android style for Qt Quick Controls. A glimpse of the style in action can be seen in the attached screenshot of the Gallery example taken on a Nexus 5 running Android KitKat 4.4. By the way, the Android style requires Android 3.0 (API level 11) or later. On older devices, a generic QML-based fallback style is used instead.

Android 5.0 preview

DISCLAIMER: work in progress

Mobile platforms keep moving at a fast pace. While we were working hard to deliver a generic native style that works on any Android version 3.0 or later, a new major version of Android was released. Android 5.0, aka Lollipop, introduces the new Material design. It comes with so many changes to the underlying platform style that we haven’t had enough time to catch up with all of them.

Unfortunately, the Material theme support is not yet on an acceptable level. Qt 5.4.0 applications will therefore default to the Holo theme on Android 5.0. The most notable issues are broken disabled states and some missing tint colors, ripple effects and busy/indeterminate animations (QTBUG-42520 and QTBUG-42644).

For the curious ones who cannot wait until the remaining issues have been tackled, it is possible to set the Material theme in AndroidManifest.xml:

<manifest ...>
  <application ... android:theme="@android:style/Theme.Material.Light">
    ...
  </application>
  ...
</manifest>

The same method can be used for setting the light or dark Holo theme for an application, for example. The values are “Theme.Holo.Light” and “Theme.Holo”, respectively.

Contributing to Qt? Come to Oslo in June 2015!


It’s half a year since the 2014 Qt Contributors’ Summit, and now is a good time to give an early warning about next year’s QtCS.

We’ll be inviting you to Oslo in early June to come and discuss the current state and future of Qt.

The Qt Contributors’ Summit is an annual event where the people contributing to Qt gather to have fun, discuss where Qt is going and even code a little.

The plans include a pre-party / hack event before the actual summit at The Qt Company offices, two days of unconference style workshops and an evening out in Oslo.

Oslo is a beautiful city in June well worth a visit, especially when combined with the possibility to meet other Qt contributors.

If you aren’t an active contributor yet, don’t worry, you still have plenty of time to start contributing to Qt. Code, documentation, tests, forum activity, helping new users… everything you do to help out Qt is considered contributing.

QtCS 2014 in Berlin was a great event, let’s make QtCS 2015 even better.
The trolls welcome you to the home of Qt!

Qt 4.8.x Support to be Extended for Another Year


Standard Qt support for Enterprise licensees is for 2 years after the next minor or major Qt release is available. For Qt 4.8 it would mean support ending in December 2014, but we will extend it for a whole year to allow seamless migration to Qt 5.

Originally the support for Qt 4.8.x would have ended on 19th December 2014, 2 years after Qt 5.0.0 was released. We are now extending the standard support for 1 more year, meaning that it will not reach end of life until 19th December 2015. By then, Qt 4.8 will have been supported for four years. Consequently, we now plan to have a Qt 4.8.7 release in Q1 2015. This is planned to be the last release of the Qt 4.8.x series, unless there is a need to provide an update due to a critical security issue.

So what does this mean for you? Well, if you are entitled to support, it means you can still use Qt 4.8.x safe in the knowledge that you will get the same level of support as before until 19th December 2015. For older versions, we do have an extended lifetime option, which you can find more information about by contacting The Qt Company.

We recommend that applications are ported to Qt 5.x, as new versions of operating systems and compilers are coming out that we can’t guarantee will be supported 100% by Qt 4.8. Qt 5 is a solid platform to migrate to, with Qt 5.4 coming out soon. Therefore, now is the time to start seriously considering porting any existing applications if you haven’t already started doing so. Porting to Qt 5 is pretty straightforward and the documentation at http://qt-project.org/doc/qt-5/portingguide.html will help with that. If you need help, we and our service partners have porting services available too – more information can be found at http://www.qt.io/services/.

Qt 5.4 Release Candidate Available


I am happy to announce that Qt 5.4 Release Candidate is now available.

After the Qt 5.4 Beta release we have made some build & packaging related updates, in addition to a large number of bug fixes based on feedback from the Beta release:

  • Mac OS X 10.10 is now used on the packaging side
  • Android SDK is updated to 21.02
  • MinGW 4.9.1 has been taken into use
  • ICU is updated to 53-1
  • QtWebEngine is separated into its own installable binary package in the installer’s component tree

Starting from Qt 5.4 RC, Qt for iOS will be built as a fat binary supporting both 32- and 64-bit builds, fulfilling Apple’s requirement for new apps (see https://developer.apple.com/news/?id=10202014a). It also contains improved support for the iPhone 6/6 Plus.

Qt 5.4 RC packages also contain Qt Creator 3.3 RC, and the commercial packages contain candidates for the new commercial value-add items as well.

Please take a tour & try the Qt 5.4 Release Candidate! It is quite close to the final release, so please give us your feedback:

Please familiarize yourself with the Qt 5.4 known issues wiki. For those who have not yet checked what Qt 5.4 brings, please refer to the Qt 5.4 Beta blog post, the wiki article listing new Qt 5.4 features, or the documentation snapshot for more details.

Qt 5.4 Release Candidate is available via online and offline installers. Installers are available from the Qt Account for commercial users of Qt. Open source users can download installers from the qt.io downloads page. An existing online installation can be updated to Qt 5.4 RC using the Maintenance Tool and selecting the package manager.

Qt Creator 3.3 RC released


We are happy to announce the release of Qt Creator 3.3 RC1. Please have a look at the beta release blog post or the change log for an overview of the new features and key improvements that are waiting for you in this new minor version.

This is the point where we think that we are almost ready to release 3.3.0, so it is a great time for you to download and try the RC, and give us last minute feedback through our bug tracker, the mailing list, or on IRC (#qt-creator on irc.freenode.net).

You can find the open source version on the Qt Project download page, and Enterprise packages on the Qt Account Portal.