What's a proper way to design a GUI event system?


I'm trying to write a GUI library from scratch (in C++) for a project (and also for learning purposes). It's working pretty well, but since this is my first attempt, I feel that the design of the event system isn't that great.

Basically, here is how I'm handling events now: whenever a mouse click occurs, I iterate over all the widgets (buttons, for example) and test whether the mouse coordinates fall inside one of them; if so, I call the event callback (a function pointer) stored in the button class itself.

So I was thinking of a different (better?) design: an event queue. Whenever an event happens (a button is clicked, for example), I do the same thing as before (i.e. find out which button was clicked), but then I construct an object for that event (e.g. ButtonClickedEvent) and add it to the queue, handling it later on, after rendering the frame, for example.
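Roughly what I have in mind, sketched with placeholder names (none of this is final):

```cpp
#include <functional>
#include <queue>
#include <string>

// Hypothetical sketch of the idea above: events are queued when they
// occur and drained once per frame, after rendering.
struct Event {
    std::string name;              // e.g. "ButtonClicked"
    std::function<void()> handler; // callback captured at enqueue time
};

class EventQueue {
public:
    void push(Event e) { events_.push(std::move(e)); }

    // Called once per frame, after rendering.
    void dispatch() {
        while (!events_.empty()) {
            events_.front().handler();
            events_.pop();
        }
    }

    bool empty() const { return events_.empty(); }

private:
    std::queue<Event> events_;
};
```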

And this is where I'm getting confused. How should I structure the whole thing without creating a mess? In other words, is there a standard-ish way of doing it?

I've been looking around for information about this, but all I can find is how events are handled in existing libraries (Java Swing, Qt, etc.).

So if anyone here could explain how this usually works, that would be much appreciated!


For this answer I will assume that widgets are not, and cannot be, operating system windows.


Appetizer: keyboard input.

The idea is that one widget has keyboard focus. You keep track of which widget that is (and if widgets have a means to indicate whether they support keyboard input, you do not give keyboard focus to widgets that do not). Then, when a keyboard event is registered, you send it to the widget that has focus.
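A minimal sketch of that focus tracking, with invented names (this is not a real API, just an illustration of the idea):

```cpp
#include <string>

// Assumed widget shape for illustration only.
struct Widget {
    std::string name;
    bool acceptsKeyboard = false;
    std::string lastKey; // records the last key event delivered
    void onKey(const std::string& key) { lastKey = key; }
};

class FocusManager {
public:
    // Refuse focus to widgets that do not support keyboard input.
    bool setFocus(Widget* w) {
        if (w && !w->acceptsKeyboard) return false;
        focused_ = w;
        return true;
    }

    // Route every key event to whichever widget currently has focus.
    void dispatchKey(const std::string& key) {
        if (focused_) focused_->onKey(key);
    }

    Widget* focused() const { return focused_; }

private:
    Widget* focused_ = nullptr;
};
```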


Main course: pointing input (mouse, touch or similar).

Yes, you can iterate over everything and check bounds. That is not great.

It is better to use space partitioning. That is, you create a tree structure where each node represents an area of the viewport/window. The root is the complete working area, and the leaves match the areas of the widgets. Now you can navigate the tree to find out what was clicked.
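A sketch of that hierarchical hit test, assuming (as is usual) that each child's rectangle lies inside its parent's, so whole subtrees can be skipped when the point falls outside the parent:

```cpp
#include <memory>
#include <string>
#include <vector>

struct Rect {
    int x, y, w, h;
    bool contains(int px, int py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
};

struct Node {
    std::string name;
    Rect bounds;
    std::vector<std::unique_ptr<Node>> children;
};

// Descend into the deepest node containing the point.
const Node* hitTest(const Node& node, int px, int py) {
    if (!node.bounds.contains(px, py)) return nullptr;
    for (const auto& child : node.children)
        if (const Node* hit = hitTest(*child, px, py)) return hit;
    return &node; // no child matched: this node itself was hit
}
```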

It is possible to go even faster: you create a bitmap/texture where the color of each pixel/texel is an id that maps to a widget. Then you simply read it as needed. I seem to recall some browsers do this, but I am not sure.
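The same idea on the CPU, as a sketch (the layout and names are assumptions, not a description of what any browser actually does):

```cpp
#include <cstdint>
#include <vector>

// An id buffer the size of the window: each entry stores the id of the
// widget drawn at that pixel.
class IdBuffer {
public:
    IdBuffer(int w, int h) : width_(w), ids_(w * h, 0) {}

    // Stamp a widget's id over the rectangle it occupies on screen.
    void fillRect(int x, int y, int w, int h, uint32_t id) {
        for (int row = y; row < y + h; ++row)
            for (int col = x; col < x + w; ++col)
                ids_[row * width_ + col] = id;
    }

    // Picking is a single read: no iteration over widgets at all.
    uint32_t pick(int x, int y) const { return ids_[y * width_ + x]; }

private:
    int width_;
    std::vector<uint32_t> ids_;
};
```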

Whatever structure you pick, don’t forget to update it when widgets move.


Side dish: event queues.

Yes and no. You probably have a message queue from the operating system already, and the usual solution is to rely on that.

Be aware that you might want to place your own items in the queue, so that other threads can use it to post code that interacts with the UI, for the UI thread to execute. If using the operating system queue for that is not viable (for example, because you want multiple queues per window), create your own.
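A minimal sketch of such a hand-rolled queue (the names are invented): other threads post closures, and the UI thread drains them at a safe point in its loop.

```cpp
#include <functional>
#include <mutex>
#include <queue>

class UiTaskQueue {
public:
    // Safe to call from any thread.
    void post(std::function<void()> task) {
        std::lock_guard<std::mutex> lock(mutex_);
        tasks_.push(std::move(task));
    }

    // Called from the UI thread only.
    void drain() {
        std::queue<std::function<void()>> pending;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            std::swap(pending, tasks_); // hold the lock only briefly
        }
        while (!pending.empty()) {
            pending.front()();
            pending.pop();
        }
    }

private:
    std::mutex mutex_;
    std::queue<std::function<void()>> tasks_;
};
```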


Drink: event bubbling.

The widget can mark the event as handled or not. If it is not handled, you send the event to the containing widget, and so on: you bubble the event up.

However, if you want to handle preview events, you propagate them in the opposite order. This can be useful to allow containers to intercept keys, for example.
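Both directions, sketched with made-up names (widgets return true when they consume the event):

```cpp
#include <string>
#include <vector>

struct Widget {
    std::string name;
    Widget* parent = nullptr;
    bool handlesClicks = false;
    bool previewsKeys = false;

    bool onEvent(const std::string&) { return handlesClicks; }
    bool onPreview(const std::string&) { return previewsKeys; }
};

// Bubbling: start at the widget that was hit, walk up until handled.
Widget* bubble(Widget* target, const std::string& ev) {
    for (Widget* w = target; w != nullptr; w = w->parent)
        if (w->onEvent(ev)) return w;
    return nullptr;
}

// Preview: walk root-to-target, letting containers intercept first.
Widget* tunnel(Widget* target, const std::string& ev) {
    std::vector<Widget*> chain;
    for (Widget* w = target; w != nullptr; w = w->parent)
        chain.push_back(w);
    for (auto it = chain.rbegin(); it != chain.rend(); ++it)
        if ((*it)->onPreview(ev)) return *it;
    return nullptr;
}
```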


Dessert: rendering.

How are you going to allow widgets to paint themselves? Remember, I am assuming here that they are not operating system windows. You can give them an API for graphics primitives. Alternatively, you can give them a graphics buffer/bitmap/texture and let them use it however they want. Either way, you need to know the area each widget takes up in the window, and you will run a callback for it to execute its painting code. You probably also want double buffering and vertical synchronization. Yet, that is beyond the scope of this question.
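A sketch of the second option (the names and the buffer layout are invented): the library hands the widget a pixel buffer covering its area, runs its paint callback, then blits the result into the window.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

struct Surface {
    int width, height;
    std::vector<uint32_t> pixels; // one ARGB value per pixel

    Surface(int w, int h) : width(w), height(h), pixels(w * h, 0) {}
    void set(int x, int y, uint32_t c) { pixels[y * width + x] = c; }
    uint32_t get(int x, int y) const { return pixels[y * width + x]; }
};

struct Widget {
    int x, y, w, h;                      // area it occupies in the window
    std::function<void(Surface&)> paint; // widget-supplied drawing code
};

// The library's side: give each widget its own surface, run its
// callback, then copy the result into the window at the widget's area.
void render(Surface& window, const std::vector<Widget>& widgets) {
    for (const auto& wd : widgets) {
        Surface local(wd.w, wd.h);
        if (wd.paint) wd.paint(local);
        for (int row = 0; row < wd.h; ++row)
            for (int col = 0; col < wd.w; ++col)
                window.set(wd.x + col, wd.y + row, local.get(col, row));
    }
}
```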

