Coding shenanigans and other nerdy musings
08/Dec/2023
If you've been following the last couple of posts here, or maybe just hearing me scream in real time on Mastodon, you may be aware of two separate and, initially, unrelated facts. First, that I've finally been giving Godot some hands-on attention instead of just platonically admiring all the cool stuff everyone else has been doing with it. And second, that after building an entire application with Qt and having QML text input elements simply not work when built for WebAssembly, I am feeling absolutely betrayed. My fee-fees are very much hurt.
But, y’know… I was building an interactive application where you can use your keyboard and mouse to do things and the application reacts in real time. And games are just interactive applications where you can use your keyboard and mouse (and others) to do things and the application then reacts in real time.
hmmmm
A lot of these games have some pretty involved interfaces, huh? Lists and grids and whatnot. And Godot DOES deploy to WebAssembly as well. I know it does, I've done it myself and have stuff you can play with on this very website.
Well, CLEARLY there is only one answer here. Let’s make some GaMeS here, shall we?
So obviously, the first thing is to just verify how feasible all of this is. I code C++; technically I don't need anything else on top, I COULD just code every single thing by hand. I'm familiar with SDL, even THAT would already handle all the creation of windows and intercepting of input events and whatnot. But if you've coded any application that's even remotely involved, you know that's a somewhat naive statement.
Making windows and finding out if the user is clicking on the screen are only the baby steps. If you want to make any sort of application that responds WELL and feels good to use, there is a lot of stuff: different controls, visibility of things, laying your items out… And what if the user just resizes your window? Will things be on top of one another now? How many views/states/pages do you have? How do you flick between them? Okay, you've done it all but… what if a user wants to run it on Windows? Does all your drawing stuff work there as well?
And all of that is BASICALLY just talking about drawing stuff on screen. There’s opening and saving files, spawning dialogs, playing sounds, loading media…. Anything more than a hello-world type thing like, say, the baseline toyBrot applications will have a bunch of things to consider. So something like an application framework actually has a lot of expectations on top of it, there’s a lot of bases that need to be covered and we need to know if Godot can cover those bases.
Lucky for us, there’s a really good case study that shows how involved and complex a Godot application can get. The Godot Editor itself
In an honoured tradition, yeah, the Godot Editor is built with Godot itself. So, if it can do it, there IS a way you can do it too. And uuuhh… this application does A LOT. It splits its main UI into resizable elements, it opens dialogs for saving and loading things, it can split into multiple windows, render stuff on the fly… And just to flex, they have a WebAssembly build of it you can just go and use. And you can even type in it!
I don’t know about you but, to me, that looks like a pretty compelling case. So let’s take a closer look
As part of me finally getting my hands dirty when it comes to Godot, I've decided to try and keep a "minimum level of contact" with it so that I never slide too far off and things start becoming unfamiliar again. Given how many different personal projects I seem to be juggling all of the time, this is a very real danger. Foreteller, toyBrot, general Godot, an unannounced game project, just day-to-day maintenance and expansion of my own IT infrastructure (I self-host all of my stuff, including this blog)… it's A LOT
So the “Mini Monthlies” are my attempt to avoid suddenly coming to the realisation that it’s been 5 months since I’ve even opened the Godot Editor. The idea, as the name implies, is to have a tiny project that I go through each month. Can be a demo, a prototype, a feature I’m implementing or exploring… the only requirement is that it has to be something I get done in a day or two and, at the end, I have something to show for it (preferably something I can upload here)
A lot of the inspiration for this comes from how much fun I had with the Zenva mini projects I did throughout their lessons. They were all really fun to go through and then later on having a tiny game you can point to is super rewarding. It IS always easier when you’re being guided through something BUT this also means I get to practice properly scoping these things as well. It’s just way too easy to simply go on adding more random stuff forever. There’s always more polish to be done but you need to put the cloth down at SOME point
This project is the first of these Godot Mini Monthlies. I went through all of the coursework, had a lot of fun and then got started in Cartomancer. That ended up taking basically all of my attention throughout November. After that, I took a couple of days for this since I had a pretty good idea of what I specifically wanted to accomplish and got the mini out just in time, even if the post itself ended up in a queue as I want to give each post at least a week’s breathing time
Since then I have actually revisited this project again for December’s mini monthly, which was also exciting, but that is a tale for the next post =)
Before we dive in, though, I think it's important to get something out of the way first, and that's where I'm coming from. I've worked a little with Java back in the day, a tiny bit with WPF and, as mentioned, some on my own with SDL. The bulk of my experience as an application developer, though, in particular for more heavyweight projects, has been with C++ and, often, Qt. I've used a fair amount of QWidgets and a LOT of QML. People familiar with Qt will already know one thing that I don't think my introduction makes clear: being compared to Qt in the context of "how good of an application framework you are" is Nightmare Mode. Make sure you have your brown pants on.
I'm pretty mad at Qt due to some rough experiences recently but make no mistake, Qt is incredible, and in WAY more ways than just being the best way to make GUIs. Which… spoilers, I guess: I still haven't seen anything that comes anywhere close to QML. And that's WITH it introducing a super annoying separation layer between the logical contexts of the GUI and the application core, where there's a whole QML-specific area of fiddliness in sending data between it and the C++ layer of the application and making sure it stays consistent on both ends (it's not super bad, but there are some very specific things you do need to learn how to tie together).
Qt is open source, you can use like 90% of it through the LGPL, and most of what's GPL is like, in-app purchases and other stuff that's not crippling for most general purpose things. It's multi-platform too: you can target Windows, Linux, Mac, WebAssembly, embedded devices… A bunch of my favourite applications in their category are all Qt. KDE Plasma, Telegram, qBittorrent, Qt Creator… Even if the C++ would put you off, there's PyQt right there (but don't be afraid, tho. C++ is pretty great actually. All the haters are just afraid of computers =P)
The question of "why would I ever even consider using this instead of just going Qt, lol?" is a very real one. If the hypothetical dev asking the question already KNOWS Qt, that is a really hard question to answer. And I'm that dev; I know Qt and I love it to bits. I like signals in Godot precisely because I know and love them from Qt.
This is not to say there aren't well known answers to that question. Qt's commercial license is on the pricey side, especially for random individual devs. The open licensing model can be a deal breaker for people: I was planning on distributing Foreteller as a web app and that means it HAS to be GPL. Qt is notably a HUGE dependency to bring in. This is not like "I'm going to bring this GUI framework in". Nah, you're bringing in TWO separate GUI frameworks, a whole system for controlling application workflow and lifecycle, localization, its own rich string manipulation system, a hierarchical application structure, a property system for your classes, a threading library… Even if you just want the GUI, you're still bringing in most of that anyway, in some ways. If you use it as open source, you need to be able to provide the source. I have two tarballs here for that; they are 700 and 760 MBs in size.
But the upsides are simply amazing. This is a huge, MASSIVE mountain to climb, and to even be put up for consideration is already a victory in and of itself. Using Qt as your GUI framework? Get ready for happy times. If you're then comfortable with properly marrying your application to Qt, at a deeper level, you're going to live a really pampered life. There's this whole subset of C++ devs who swear by boost, and MY gut reaction has always been "if I'm bringing a massive random dependency in anyway, why wouldn't I just go Qt?". Never found an answer to THAT. I've used Qt even for a daemon, on a cloud server. It was good too; you absolutely don't need the GUI to justify it.
I'm not sure how seriously I would've considered Godot if I hadn't had the bad experience I had with Foreteller, and I was already super hyped about Godot at the time I started it. This "dismissal" is not on Godot's own merits; I don't think it's possible to look at the Godot Editor and doubt that Godot is a framework that can pull its weight. But… y'know… "Godot Engine" and "Qt Creator" live in the same folder in my Plasma application menu. If I'm thinking of opening one of those, the other is right there.
Even limiting the scope, the situation is still rough. In this post specifically, I'm going to be mostly talking about GUI, and that's a way smaller problem to tackle. Godot doesn't have to compare to ALL of Qt. It doesn't even need to go against QWidgets, as I don't really use those much. It only really has to measure up against QML: the uncontested best system for making GUIs I've ever seen. You know, that one I know of nothing that comes even remotely close to? Yeah, just that one. Way smaller of a challenge.
And even with all of that said, Godot very much IS on the table here. It’s kicked the door down and demanded consideration. If you were getting the impression that all of this scaffolding and qualifying meant I was about to drag it through the dirt… well, let’s just say that when Godot saw what a mountain that was, it went and got the bloody climbing gear on
I mentioned before that this was my plan for November/23’s Mini Monthly. If you’ve played around with the demos I mentioned on the previous Godot post you may have noticed that they’re all very light on the GUI.
This is something that was bugging me. I messed a lot with a bunch of Node2D, a little bit with some Node3D (though not in these demos) but basically nothing on the Control side. So my plan for this month was to make something that would be mostly, if not all, GUI. My choice ended up being ye olde reliable, toyBrot.
But toyBrot can also output some pretty cool images and it's just fun seeing how the fractal behaves as you tweak the parameters, so I've always wanted to have an interactive version. All of this put together means toyBrot is an application whose code is well-known, that I already have, and that I've always wanted to turn into a GUI application where you can check things in real time. And this GUI is most of what's missing? And on top of that, I wanted a GUI-focused project I could do in a short amount of time? Sounds like we have a winner!
So we have a goal, then. What’s in our tool belt that’ll get us there?
Godot being a game engine is particularly helpful here. The way to implement raymarching and get it running fast is to send the work to the GPU, and the easiest way to do that is through shaders. In regular toyBrot I normally do that through compute shaders, but that's largely because what I want to look at is compute. If you don't have that specific framing in mind and can engage with the problem on its own terms instead, suddenly raymarching sounds A LOT like a fragment shader thing
If this is starting to lose you, don’t fret, here’s a quick explainer. I’m also not going to be talking ABOUT these things, this is just a black box starting point, the fun will be all outside of this box
Raymarching is the technique I use to calculate the fractal. I explain a little bit more in a previous Multi Your Threads post but here’s the shortest version.
For each pixel on the screen, you imagine a ray that goes from the camera origin to that exact pixel. Now, besides the ray, you have a function that tells you how far a point in space is from a certain object, for us, the object is the fractal. So you ask, “okay, how far are we?” And if we’re not IN the object, we then move along the ray a bit and ask “okay, how far are we now?”
If we ever HIT the object, we'll know: we're "less than 0 far". And, at that point, we know how many steps it took to get there. If the distance we've travelled just keeps increasing instead, we give up and say "yeah, this doesn't hit"
Knowing how many steps we needed to take, and where the point is in space, we can then know what to draw for that pixel. If you do that for every pixel on the screen, you can draw a whole image. And wouldn't you know it, doing a calculation for every pixel is kind of what fragment shaders are for.
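The loop described above is simple enough to sketch. Here's a minimal, hedged GDScript version (the real thing lives in a fragment shader, and a plain unit sphere stands in for the fractal's distance function):

```gdscript
# Minimal sketch of the raymarching loop described above. In the actual
# project this lives in a fragment shader; here a unit sphere stands in
# for the fractal's distance function.
const MAX_STEPS := 100
const HIT_EPSILON := 0.001
const MAX_DISTANCE := 100.0

# Distance estimator: how far is this point from the surface?
func sdf(point: Vector3) -> float:
    return point.length() - 1.0

func march(origin: Vector3, direction: Vector3) -> Dictionary:
    var travelled := 0.0
    for step in MAX_STEPS:
        var point := origin + direction * travelled
        var dist := sdf(point)        # "okay, how far are we?"
        if dist < HIT_EPSILON:        # close enough: we hit the object
            return {"hit": true, "steps": step, "point": point}
        travelled += dist             # safe to move this far along the ray
        if travelled > MAX_DISTANCE:  # gone too far: give up
            break
    return {"hit": false, "steps": MAX_STEPS, "point": origin}
```

The step count and the hit point are then what the colouring works from.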
Whether they're stuff you code explicitly for effects or something that happens under the hood as you say "draw this image, green is transparent, kthxbai", shaders are very important to games, and Godot, as an engine, has support to help you code and use shaders. That's basically our "operational" side mostly taken care of. Just port over an OpenGL implementation of toyBrot and it's more or less done. Easy
I found this little snippet that uses a fragment shader to render a raymarched object within a cube. Plugged my distance function and colouring in, declared the parameters I wanted as inputs and it was pretty much done. I didn’t want to waste time with sorting the raymarching out and I didn’t. Mission accomplished!
I also wanted to have the user be able to move the camera, fly within and around the fractal, look for places where it takes interesting shapes. Regular toyBrot is not good at this, you have to manually type in your camera coords and where it's looking, but now we're in a game engine, this is literally made to build real-time first person controllers with! But, just like with the fractal generation itself, this is not something I wanted to have to think about or work on. In a shocking demonstration of scope discipline, I just wanted to find something more or less ready-made that I could plug in and forget about. Didn't need to be great, just needed to get the job done. After a blissfully short look about, I found a simple script you can attach to your Camera3D and call it a day. Yup, that's the one, perfect!
This meant I could focus on the GUI and check out what sort of GUI controls Godot had to offer. It’s a pretty good library of controls, all your usual suspects are there
I don’t know about you, but that looks like a pretty great starting point to me. Having this sort of variety is reassuring, it gives me a sense that I can just think about “what would be a nice way to interact with this application” without worrying about whether I can make the GUI do the thing I want. One of the reasons I like QML so much is that this is essentially a non issue. I’m not sure I can imagine an application doing something that I couldn’t get it to do, at the very least nothing reasonable. And a solid set of controls is a fantastic starting point.
Variety of provided components is not all there is, though. Another key area is making sure you can position those components correctly. This is an area where I have some reservations towards the usual no-brainer way of doing things in Godot: "just drag stuff where you want it to be". This sort of approach works well enough when you're positioning game elements, for the most part. It's usually important to be able to put your collectable in the correct position quickly, as you need to place like 20 of those. Less important, a lot of the time, is to make sure that each one is at a precise location, exactly and consistently 46 pixels apart.
When coding for GUI, though, the latter becomes much more important. For anything with actual structure, rather than a simple standalone menu or a throwaway demo GUI, you don't want to just be eyeballing things and calling it a day. In my comfort zone, QML, there are two ways of doing this, and they're super helpful: you can anchor items, so that their position is relative to a different item, or you can have your item be managed by a Layout, which has its own set of rules as to how it positions and sizes all of its children. In Godot you also have two ways of doing this positioning, and they're anchors and Containers. Anchors are used to position items relative to another item, and Containers are elements which have their own set of rules they use to position and size all of their children. So basically the same idea as a QML Layout.
This means that, as far as the concepts behind the tools I'm provided go, I'm in familiar territory here. I can think of, say, a menu, in terms of "so this anchors to the top and the right and, then, the element itself is a column layout where each row is…". The path from visuals to code is already there from my previous dev experience. Guess all that's left now is to actually get my hands dirty and see what I can do
So, containers/layouts MAY be a bit of a novel idea for people who haven't had to deal much with application UI. The gist is that they're elements which define special rules for how their children get placed. Each container type has its own set of rules. For toyBrot here I only really had to use vertical and horizontal BoxContainers.
These would be similar to Row and Column layouts in QML. A VBoxContainer arranges its children as a vertical list. In the screenshot above, each line or row of items is a different child. Sometimes it's a single item, like a separator or a label, and sometimes it's an HBoxContainer. By default, items are just tightly packed and stacked from top to bottom, but you can change that both on the container itself and on an individual item.
The container itself has its own general rules but before “making the final decision” on where to place the item relative to itself, it also consults the node’s Layout properties. In the example above, the Slider element explicitly asks to be put towards the right, and it has a minimum horizontal size of 200 pixels.
The Label element, the one which says "Range" here, has the defaults. So it's just to the left and as big as it would normally be. Since it and the Slider ask to be placed at opposite ends of whatever space they have, this leaves room for the ValueLabel element to exist between them. This one doesn't have a minimum size, but also asks to be placed on the right, aligning it towards the Slider. Of note here is that the order in which the components are placed in the Container matters; this is something to keep an eye out for. The "Container Sizing" options determine how the item is placed WITHIN the space that the Container allocates it
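As a sketch of how a row like that could be assembled from code (purely illustrative; the actual project sets these properties in the editor, and the names here are my own):

```gdscript
# Illustrative sketch of the "Range" row: a Label on the left, a value
# label and a Slider asking to sit towards the right. Names are made up;
# the real project configures this in the editor instead.
var row := HBoxContainer.new()

var label := Label.new()
label.text = "Range"  # all defaults: sits on the left, at its natural size

var value_label := Label.new()
value_label.size_flags_horizontal = Control.SIZE_SHRINK_END  # hug the right

var slider := HSlider.new()
slider.size_flags_horizontal = Control.SIZE_SHRINK_END  # also ask for the right
slider.custom_minimum_size = Vector2(200, 0)            # min. 200 px wide

row.add_child(label)
row.add_child(value_label)
row.add_child(slider)
```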
Different types of containers will have their own different rules. There are containers for arranging things in grids, for centering elements within a space, containers that work like regular box containers but "loop onto new lines" if you'd go past a size limit… Even the resizable panes of the Godot Editor are SplitContainers. These elements need to do double duty: they're the tools for making game UIs with inventory screens, equipment menus, shop panes, character stat screens… and at the same time they end up serving a lot of generic application use cases.
It DID take me a little bit to get around to understanding the way you massage the `Layout` properties just right to get the behaviours you want but past that initial hurdle, I’m pretty happy with them. They integrate well with the editor, it’s easy to see what you’re doing and you can get some pretty nice UIs done with them. It’s definitely not something that I would dread as a pain point if I were to be making any old regular application or tool with Godot.
So Containers are super useful, but sometimes you end up just kind of having to manually place things, like, say, when you need to place the Containers themselves. This is where Anchors come in: they're a relative point of reference for your item to position itself. They come accompanied by a set of margins, so if you have 20 on your "left margin", your container will always leave a gap of 20 pixels to its left.
So the next obvious question is: since anchors are relative, what are they relative to? And this is where Godot let me down a little bit, the spoiled baby that I am. Your anchor is relative to your direct parent. Either a control or, if there is no control above, the viewport itself. More specifically, the anchor is the top left corner of that parent. This all makes sense but it IS a bit limiting. For comparison, here’s a quick example of how flexible this is in QML
```qml
Window
{
    width: 1280
    height: 720
    visible: true
    title: qsTr("Hello World")

    Rectangle
    {
        id: bg
        anchors.fill: parent
        color: "#333333"
    }

    Rectangle
    {
        id: rect0
        color: "hotpink"
        width: 100
        height: 50
        anchors
        {
            top: parent.top
            topMargin: 10
            left: parent.left
            leftMargin: 15
        }
    }

    Rectangle
    {
        id: rect1
        color: "yellow"
        width: 100
        height: 50
        anchors
        {
            top: rect0.bottom
            topMargin: 5
            left: rect0.right
            leftMargin: 5
        }
    }

    Rectangle
    {
        id: rect2
        color: "mediumspringgreen"
        width: 150
        height: 50
        anchors
        {
            top: rect0.top
            topMargin: 0
            left: rect1.right
            leftMargin: 5
        }
    }

    Rectangle
    {
        id: rect3
        color: "lightcyan"
        width: 100
        height: 50
        anchors
        {
            top: rect1.bottom
            topMargin: rect1.height
            right: rect2.right
            rightMargin: 5
        }
    }
}
```
You can specify different anchoring points for each of your anchors, and these don't even need to refer to the same item, as long as they refer to an anchor of the same type (horizontal/vertical). In addition to Left/Right/Top/Bottom there are also anchors for the Horizontal and Vertical Centers. You're also not limited to your direct parent when anchoring: you can anchor to any item that is A parent or a direct sibling. So these rectangles can anchor relative to each other. If any of their dimensions or positions changes, the whole thing rearranges itself automatically
Things like THIS are why I say that familiarity with QML sets a very high bar for other UI frameworks. It makes organising items within your application really, really easy. And QML does it all without you needing a visual editor. There IS one in Qt Creator, I know Qt works on it, I keep seeing mentions of it in the changelogs but… can't tell you how good it is, I've never even wanted to use it.
And not having THIS sort of flexibility with Godot's anchors is something that does end up limiting what you can do and how you can massage things together. At the very least, there are a lot of things which become harder. But this is not to say that Godot's anchors are not fit for purpose or that you can't get them to help you instead. You DO get some really handy features there, even if they're a little bit obtuse in (this admittedly unfair) comparison.
You MAY have noticed in the last Godot Editor screenshot that, in addition to margins, there is also a set of controls for the anchors themselves. And these DO help a lot. These controls range from 0 to 1, with (0, 0) being top left and (1, 1) being bottom right. This means that if you set your Bottom anchor to 0.5, your component will extend halfway down its parent.
You can see that the minimum size depends on your item; it'll try its best. But it WILL expand to fill the area if required. For as much as I do whine about it not being quite as good as QML, this already gives you a lot of ways to make your life easier.
Another thing of note is that this DOES feel a bit janky in at least one more way. In the above examples you can see that the margin for right and bottom is negative, and also that their proper name is Anchor Offsets. This is a product of the mentality of always counting from the top left, which is the default. If you want to count space from the right, you need to adjust your anchor and then make sure the offset is NEGATIVE, because it's a literal pixel offset, not an actual margin.
Once you have those details in mind, it isn't actually too bad. You CAN get Godot Controls into the sorts of places and sizes you want through these offsets. You even get one invaluable thing for free! Since your anchor point adjustments are already proportional, you get proportional resizing for free: if you set a panel's anchors to 0, 0, 0.5, 1, it will ALWAYS cover the entire height of the screen and the left half of it
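The same properties are exposed to scripts, so that left-half panel can be sketched in code too (a hedged sketch; the variable is illustrative, and the real project does this in the editor):

```gdscript
# Illustrative sketch: a panel anchored to cover the left half of its
# parent, full height, with a small pixel gap on each side.
# Anchors run from 0 to 1 across the parent; offsets are literal pixels.
var panel := Panel.new()
panel.anchor_left = 0.0
panel.anchor_top = 0.0
panel.anchor_right = 0.5    # stop halfway across the parent
panel.anchor_bottom = 1.0   # full height
panel.offset_left = 10      # 10 px gap from the left edge
panel.offset_right = -10    # NEGATIVE: a pixel offset, not a margin
```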
So far we've had a look at the types of controls we have and the options for putting them where we want. But they still don't actually control anything, so that's the last step, and if you've done any sort of Godot coding you probably already have an idea of where we're going: these controls have signals.
In this sense, both Godot and Qt take the same approach to making the control do something, and that is to take the logic of the something out of the control. When you interact with a control (type something in, drag a slider, toggle a switch) it "emits a signal", which is to say it creates an event. And you can hook these events up to handlers in scripts. The editor itself can help you do this hooking up
Each signal represents an event you MAY want to listen to. You can "connect" the signal to a regular function (here, the _on_color_btn_pressed() function from the SideBar) and then, every time it gets emitted, the function gets called and handles the logic. This type of flow is not exclusive to controls in Godot; this is, for example, the way collision is handled, and it's a pretty flexible system: you can define your own signals from scripts, connect and emit them whenever you need… And maybe one of those actions is just emitting more signals. In fact, that's basically what all of those sliders are doing
```gdscript
func _on_plain_color_changed(color):
    settings.plain_colour = color
    emit_signal("settings_updated")

func _on_hue_factor_changed(value):
    hueFactorLabel.text = str(value)
    settings.hue_factor = value
    emit_signal("settings_updated")

func _on_hue_offset_changed(value):
    hueOffsetLabel.text = str(value)
    settings.hue_offset = value
    emit_signal("settings_updated")

func _on_value_factor_changed(value):
    valueFactorLabel.text = str(value)
    settings.value_factor = value
    emit_signal("settings_updated")

func _on_value_range_changed(value):
    valueRangeLabel.text = str(value)
    settings.value_range = value
    emit_signal("settings_updated")

func _on_value_clamp_changed(value):
    valueClampLabel.text = str(value)
    settings.value_clamp = value
    emit_signal("settings_updated")
```
This settings_updated signal in turn gets captured by the root node’s script which updates the shader’s parameters to change how the fractal is drawn. It’s all signals, all the way down!
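For completeness, here's roughly what that plumbing looks like when done from a script instead of the editor. This is a hedged sketch in Godot 4 syntax, not the project's actual wiring; the handler bodies are placeholders, and only the node path is taken from the project:

```gdscript
# Hedged sketch: declaring a custom signal and connecting everything from
# code instead of through the editor. Handler bodies are placeholders.
signal settings_updated

func _ready():
    # Built-in control signals can be connected in code...
    var slider := $MenuPanel/ColorPanel/VBoxContainer/RainbowList/row_huefactor/Slider as HSlider
    slider.value_changed.connect(_on_hue_factor_changed)
    # ...and so can custom ones, from whoever wants to listen.
    settings_updated.connect(_on_settings_updated)

func _on_hue_factor_changed(value: float) -> void:
    settings_updated.emit()  # same as emit_signal("settings_updated")

func _on_settings_updated() -> void:
    pass  # e.g. push the new values into the shader's parameters
```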
The one thing that ends up being fiddly, for different reasons, in both QML and Godot is tying the interface and the code together. They have almost diametrically opposed problems. In QML the issue is that the C++ and the QML contexts of your application are isolated: by default, neither knows what goes on inside the other. So if you need to connect a signal from C++ inside QML or vice versa, or even just send data around, say, to initialise a control, there's a layer of finickiness. You essentially need to manually walk across this divide, bringing a telephone cable with you and making sure it's plugged in at both ends to make the bridge… for this one specific thing, mind you. You need to do this for each thing you pass around. As general software design practice, this is good: you're isolating your presentation from your logic. But from a development point of view, yeah, it is a whole area to get angry at in and of itself. But what about Godot?
Well, in Godot, this separation just doesn't really exist. Everything's just part of the same scene tree and, between that and how interchangeable working in the editor and coding things yourself often is, it can feel like a big soup where everything is mixed together. The editor helps you a lot but, by doing that, it also ends up imposing a certain structure on you. If you connect a signal through the editor, that information gets saved in your scene and your script logic is none the wiser; you don't have that relationship expressed in your code. But if you connect a signal through a script instead, it's the editor which now doesn't know and can't show you the indicators in the scene tree, even if your script is a @tool script (a script that also runs in the editor itself, not just when the application is running). The upside to this proximity DOES mean you can just access stuff, but even that's not quite as smooth as it sounds. When I say valueRangeLabel.text up there, that's a reference to another UI element, and I have to tell the script which node that actually is. The way I do this is through the script and it's… not pretty
```gdscript
#sidebar.gd
<...>
var colorModeToggle : CheckButton
var bgColourPicker : ColorPickerButton
var plainColourPicker : ColorPickerButton
var hueFactorSlider : HSlider
var hueFactorLabel : Label
var hueOffsetSlider : HSlider
var hueOffsetLabel : Label
var valueFactorSlider : HSlider
var valueFactorLabel : Label
var valueRangeSlider : HSlider
var valueRangeLabel : Label
var valueClampSlider : HSlider
var valueClampLabel : Label
var saturationSlider : HSlider
var satLabel : Label
var frsqSlider : HSlider
var frsqLabel : Label
var mrsqSlider : HSlider
var mrsqLabel : Label
var foldlimSlider : HSlider
var foldlimLabel : Label
var scaleSlider : HSlider
var scaleLabel : Label
var iterationsSlider : HSlider
var iterLabel : Label
var maxstepsInput : LineEdit
var colldistInput : LineEdit
var plainColorList : VBoxContainer
var rainbowColorList : VBoxContainer
<...>

func _ready():
    _assign_aliases()
    <...>

func _assign_aliases() -> void:
    colorModeToggle = $MenuPanel/ColorPanel/VBoxContainer/row_colormode/Toggle as CheckButton
    bgColourPicker = $MenuPanel/ColorPanel/VBoxContainer/row_bg/Picker as ColorPickerButton
    plainColourPicker = $MenuPanel/ColorPanel/VBoxContainer/PlainList/row_plain/Picker as ColorPickerButton
    hueFactorSlider = $MenuPanel/ColorPanel/VBoxContainer/RainbowList/row_huefactor/Slider as HSlider
    hueFactorLabel = $MenuPanel/ColorPanel/VBoxContainer/RainbowList/row_huefactor/ValueLabel as Label
    hueOffsetSlider = $MenuPanel/ColorPanel/VBoxContainer/RainbowList/row_hueoffset/Slider as HSlider
    hueOffsetLabel = $MenuPanel/ColorPanel/VBoxContainer/RainbowList/row_hueoffset/ValueLabel as Label
    valueFactorSlider = $MenuPanel/ColorPanel/VBoxContainer/row_valfactor/Slider as HSlider
    valueFactorLabel = $MenuPanel/ColorPanel/VBoxContainer/row_valfactor/ValueLabel as Label
    valueRangeSlider = $MenuPanel/ColorPanel/VBoxContainer/row_valrange/Slider as HSlider
    valueRangeLabel = $MenuPanel/ColorPanel/VBoxContainer/row_valrange/ValueLabel as Label
    valueClampSlider = $MenuPanel/ColorPanel/VBoxContainer/row_valclamp/Slider as HSlider
    valueClampLabel = $MenuPanel/ColorPanel/VBoxContainer/row_valclamp/ValueLabel as Label
    saturationSlider = $MenuPanel/ColorPanel/VBoxContainer/RainbowList/row_sat/Slider as HSlider
    satLabel = $MenuPanel/ColorPanel/VBoxContainer/RainbowList/row_sat/ValueLabel as Label
    frsqSlider = $MenuPanel/ParamPanel/ParamList/row_frsq/Slider as HSlider
    frsqLabel = $MenuPanel/ParamPanel/ParamList/row_frsq/ValueLabel as Label
    mrsqSlider = $MenuPanel/ParamPanel/ParamList/row_mrsq/Slider as HSlider
    mrsqLabel = $MenuPanel/ParamPanel/ParamList/row_mrsq/ValueLabel as Label
    foldlimSlider = $MenuPanel/ParamPanel/ParamList/row_foldlim/Slider as HSlider
    foldlimLabel = $MenuPanel/ParamPanel/ParamList/row_foldlim/ValueLabel as Label
    scaleSlider = $MenuPanel/ParamPanel/ParamList/row_scale/Slider as HSlider
    scaleLabel = $MenuPanel/ParamPanel/ParamList/row_scale/ValueLabel as Label
    iterationsSlider = $MenuPanel/ParamPanel/ParamList/row_iter/Slider as HSlider
    iterLabel = $MenuPanel/ParamPanel/ParamList/row_iter/ValueLabel as Label
    maxstepsInput = $MenuPanel/ParamPanel/ParamList/row_maxsteps/Input as LineEdit
    colldistInput = $MenuPanel/ParamPanel/ParamList/row_colldist/Input as LineEdit
    plainColorList = $MenuPanel/ColorPanel/VBoxContainer/PlainList as VBoxContainer
    rainbowColorList = $MenuPanel/ColorPanel/VBoxContainer/RainbowList as VBoxContainer
<...>
```
It’s just a list of aliases I’m hard-coding and manually assigning at _ready time, and honestly I kind of hate it, but I don’t think there’s a much better way of doing it UNLESS I also do the entire creation of the UI through scripts, just calling add_child and going wild. But even if that were a sane way of setting things up, it would mean I’d have no editor preview (maybe unless it was a @tool script, I think?) and in general adjusting things WOULD be fiddlier in quite a few ways.
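For contrast, here’s a rough sketch of what the fully code-driven alternative for one of those rows might look like. To be clear, this is illustrative only: the function name, node names and value ranges are made up, not what the project actually uses.

```gdscript
# Sketch: building one slider row entirely in code instead of in the editor.
# Names and ranges here are placeholders for illustration.
func _build_slider_row(row_name: String, min_val: float, max_val: float) -> HBoxContainer:
	var row := HBoxContainer.new()
	row.name = row_name

	var slider := HSlider.new()
	slider.name = "Slider"
	slider.min_value = min_val
	slider.max_value = max_val
	slider.size_flags_horizontal = Control.SIZE_EXPAND_FILL
	row.add_child(slider)

	var label := Label.new()
	label.name = "ValueLabel"
	label.text = str(slider.value)
	row.add_child(label)

	return row
```

You’d then add_child these rows into the list containers from _ready. It works, but you lose the editor preview entirely, which is exactly the trade-off in question.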
Additionally, since these are all hard-coded relative paths in the scene tree, this approach also introduces some extra rigidity. I’d like to be able to toggle the UI’s visibility, and the easiest way to do that would be to spawn a separate CanvasLayer holding all the things I want to show and hide; then I can just toggle the visibility of THAT. But every node referenced this way risks breaking, because its path changes if I reparent things. If I decide I want to turn that row into its own scene, same thing. If I rename one of those nodes, all these assignments break.
So, like most things in life, this approach has its own drawbacks to deal with. To me none of these are deal breakers, but I DO like the explicit context separation that QML brings, in a similar way to how, say, OpenCL also makes that separation explicit. Really annoying to deal with in both situations, but to me it makes for a more sound application structure in general. Regardless of these opinions, once you make sure you can refer to things and start connecting signals, that’s it: your interface can actually start doing stuff from this point on!
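As a quick sketch of that last step, connecting a slider to a handler in Godot 4’s gdscript looks something like this (the handler itself is mine, not from the project):

```gdscript
# value_changed passes the new slider value straight to the callback
func _connect_signals() -> void:
	hueFactorSlider.value_changed.connect(_on_hue_factor_changed)

func _on_hue_factor_changed(value: float) -> void:
	hueFactorLabel.text = "%.2f" % value
	# ...and from here, push the new value to whatever does the rendering
```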
UPDATE: Frozen Fractal on Fedi has pointed me towards the existence of scene-unique node names, which improve this reference situation a LOT. They’re basically an editor-native way of doing the aliasing I’ve been doing by hand.
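For reference: after marking a node as “Access as Unique Name” in the editor (right-click the node), you can refer to it with a `%` from anywhere in the same scene, regardless of where it sits in the tree. Something like this, assuming a node renamed to “HueFactorSlider” and marked unique (that name is hypothetical):

```gdscript
# No full path needed; this keeps working even if the node is
# reparented within the scene.
@onready var hueFactorSlider := %HueFactorSlider as HSlider
```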
From everything we’ve looked at, it would seem Godot does indeed have a good set of tools for coding good old “regular applications”, not just games. Personally, I would call this first experience a very successful one!
Plus, again, it’s on the web and you can even type in the text fields! The free camera works well enough (use WASDQE to move around, right-click and drag to turn; there are no instructions IN the app itself at the time of writing, and watch out for right-click-drag gestures from your OS or browser).
For what ended up being a two-day project to learn all of this, I’m very happy with the result and would say that Godot, as it turns out, IS a valid option to consider as a multi-platform application framework. At the very least as a GUI framework.
I’ve stated before that, to me personally, all of this is in comparison to Qt and what THAT framework offers, and in that respect I think it’s important to add the caveat that this is a serious case of apples to oranges. For stuff outside of games, Qt just offers WAY more. But it all depends on whether you NEED all that extra stuff. With this being such a small project, there is also still a lot to investigate within Godot. For example, I didn’t do any file system operations, and I never touched the C# support within Godot, intentionally focusing on gdscript to “learn the standard” before I go off on my own. Even beyond that, there’s a good chance I just skip C# entirely and go straight to integrating C++ into Godot. If going that route and using Godot merely as a UI layer is easy, then a LOT of doors open up. Community bindings offer the same kind of integration for Rust, for example.
Godot has been maturing rapidly and it’s already quite impressive. Right now, due to my problems with Qt, it’s probably my go-to choice for anything I want to deploy on the web, game or not. If you’re on the lookout for something along these lines, I’d definitely recommend giving Godot a chance. And if you’re someone who came INTO Godot through games, there’s a lot more you can do with it than one might initially think.
So for interactive toyBrot, there are still some things I’d like to do. Besides actually trying to make it a bit more real-time friendly (something I’ve put next to no effort into), I’d also like to add the ability to take and save screenshots, as well as to save and load parameter sets. If you find a combination that generates a particularly interesting fractal, you could save and share it.
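Neither of those exists yet, but both look fairly straightforward in Godot 4. Here’s a hedged sketch of what I have in mind; the file paths, function names and the shape of the parameter dictionary are all placeholders:

```gdscript
# Sketch only: none of this is in the project yet.
func _save_screenshot() -> void:
	# Grab whatever the viewport currently shows and dump it as a PNG
	var image := get_viewport().get_texture().get_image()
	image.save_png("user://screenshot.png")

func _save_params(params: Dictionary) -> void:
	var file := FileAccess.open("user://params.json", FileAccess.WRITE)
	file.store_string(JSON.stringify(params))

func _load_params() -> Dictionary:
	var file := FileAccess.open("user://params.json", FileAccess.READ)
	return JSON.parse_string(file.get_as_text())
```

Sharing would then just be a matter of passing that JSON file around.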
Another thing that has been on my mind is UI scaling, and high-resolution support in general. Right now I’ve added no controls for it and, as such, text looks absolutely tiny if you open this on a 4K screen. More broadly, my other Godot demos also have some emscripten-related scaling issues that I’ve long wanted to tackle. It might become a Godot mini-project in the near future.
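One avenue I haven’t actually tried: Godot 4 windows expose a content_scale_factor, so for the desktop case something like this might already go a long way. This is an untested sketch, and the “96 DPI as baseline” heuristic is my own assumption:

```gdscript
# Untested sketch: scale the whole UI based on the reported screen DPI,
# treating 96 DPI as the 1.0 baseline and never shrinking below that.
func _apply_ui_scale() -> void:
	var dpi := DisplayServer.screen_get_dpi()
	get_window().content_scale_factor = max(1.0, dpi / 96.0)
```

Whether this plays nicely with the emscripten builds is exactly the part I’d still need to investigate.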
As for The Great Refactoring, I’m less sure. I used to mostly post Multi Your Threads here but not only has it been quite a while since I’ve done so, I’ve also lost both of my better video cards in the main workstation, so things are a bit more limited here. Being forced onto an nVidia card does have side effects other than broken drivers which don’t work properly with Wayland in <current year argument>, though. Now that ComputeCpp is gone in the wake of Intel buying Codeplay, I guess it’s time to put oneAPI to the test and see if I can get that implementation running on this Maxwell. We’ll see where we end up, but I definitely want to return to posting here more regularly, so even if it’s something small, I’ll try and keep it coming. Until then <3