Category: Uncategorized

Using JS for Qt logic

Sorry, but I’m going to have to take sides on this one: Electron sucks. At its core, it wraps a program in an embedded web browser with somewhat looser restrictions on interfacing with the host system, just to make it possible for web developers to target the desktop without learning a new language. The benefits of Electron are heavily biased toward developers rather than end users, who are supposed to be the whole point of the user experience.

The problem is that HTML5 makes for an abstract machine that is far too abstract, and the more layers of abstraction that sit between the software and the hardware, the more overhead accumulates from translation and just-in-time compilation. The result is a web browser that uses 120 MB of shared memory and 200 MB of resident memory per tab.

For a web browser, though, abstraction is good. It makes development easy without sacrificing user experience for what is essentially a meta-program. But for any other desktop application with a specific, well-defined purpose, this abstraction is excessive.

This is generally why I like Qt: it is one of the only libraries left that continues to support native desktop and embedded application development without sacrificing performance. The one problem, however, is that this performance requires the use of C++, and most people do not know how to write C++. Moreover, it requires double the work if the program also targets the web.

There does exist QML, which removes most of the C++ and exposes a nice declarative syntax that combines both layout and logic into a single file. However, it has two significant problems: first, it adds even more cruft to the output program; second, custom functionality still requires interfacing with C++ code, which can get a little difficult.

Qt’s main Achilles’ heel for a long time has been targeting the web. There are various experimental solutions available, but none of them are stable or fast enough to do the job yet.

I’ve been coming up with an idea. Qt exposes its V4 JavaScript engine (a JIT engine) for use in traditional C++ desktop programs. What I could do is the following:

  • Write JS code that both the browser and desktop clients share in common, and then make calls to some abstract interface.
  • Implement the interface uniquely for each respective platform.

For instance, the wiring for most UI code can be written in C++, which then exposes properties and emits events into JS-land. Heck, Qt already does most of that work for us with meta-objects.
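As a sketch of what that split could look like (every name below is hypothetical, invented purely for illustration), the shared logic would depend only on an abstract interface, with one implementation per platform:

```typescript
// Hypothetical shared interface; these names are illustrative, not from Qt.
interface PlatformBridge {
  readTextFile(path: string): Promise<string>;
  notify(message: string): void;
}

// Shared application logic, written once against the abstract interface.
async function loadGreeting(bridge: PlatformBridge): Promise<string> {
  const text = await bridge.readTextFile("greeting.txt");
  bridge.notify("greeting loaded");
  return text.trim();
}

// A browser-side implementation; a desktop build would instead wrap
// properties and slots exposed from C++ through the meta-object system.
const browserBridge: PlatformBridge = {
  readTextFile: async (path) => "hello from " + path + "\n", // stub; real code might use fetch()
  notify: (message) => console.log(message),
};
```

The desktop implementation would hand the same shared code an object backed by C++, so the logic itself never knows which platform it is running on.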

How do I maintain the strong contract of an interface? You need a little strong typing, don’t you? Of course, of course – we can always slap in TypeScript, which, naturally, compiles to standards-compliant JavaScript.

The one problem is supporting promises in the JS code that gets run, which mostly depends on the capabilities of the V4 engine. I think it supports promises, but this does not seem well documented. Based on this post about invoking C++ functions asynchronously, I think I need to write callback-based functions on the C++ side and then promisify them when connecting the JS interface to the C++ side. That shouldn’t be too hard.
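A minimal promisify helper along those lines might look like this, assuming a hypothetical callback-style function standing in for one exposed from C++ (all names here are made up for illustration):

```typescript
// Node-style callback signature, as a callback-based C++ slot might use.
type Callback<T> = (err: Error | null, result?: T) => void;

// Wrap a callback-based function into one that returns a Promise.
function promisify<A, T>(
  fn: (arg: A, cb: Callback<T>) => void
): (arg: A) => Promise<T> {
  return (arg) =>
    new Promise<T>((resolve, reject) => {
      fn(arg, (err, result) => (err ? reject(err) : resolve(result as T)));
    });
}

// Hypothetical stand-in for a callback-style function from the C++ side.
function fetchConfig(key: string, cb: Callback<string>): void {
  setTimeout(() => cb(null, "value-for-" + key), 0);
}

const fetchConfigAsync = promisify(fetchConfig);
```

The JS-side interface would then expose only the promisified form, so the shared code can `await` it the same way in the browser and on the desktop.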

Note that important new features for QJSEngine, such as ES6 support, were only added in Qt 5.12. This might complicate distribution on Linux (since Qt continues to lag behind in Debian and Ubuntu), but we’ll get there when we get there – it is like worrying about tripping on a rock at the summit of a mountain when we are still at base camp.

Indexing the past and present

With the shutdown of GeoCities Japan, we are reaching an important point in the history of the Internet: historical information is vanishing, replaced by new content that is hidden away as small snippets in social media systems.

It is becoming increasingly apparent that a vast trove of information is simply missing from Google Search. Because Google aggressively pushes well-ranked sites, user-made sites with obscure but useful information go under-indexed, and their lack of maintenance eventually leads to their loss forever.

For instance, I was only able to find MIDI versions of Pokemon Ruby and Sapphire music on a personal site hosted by Comcast. After Comcast shut down personal sites, the information was lost to indexing forever, hidden away in the Internet Archive.

What I propose is indexing and ranking the content of the Internet Archive and social media networks to build a powerful search engine capable of searching past, present, and real-time data.

A large fault of Google Search over the years has been its dumbing down of information during the Knowledge Engine’s aggregation process, which inhibits the usefulness of complex queries. If a query is too complex (i.e., contains keywords that are too far apart from each other), Google Search will attempt to drop some keywords to fit the data it has indexed, which only falls into particular categories or keywords. If the whole complex query is forced, though, Google Search comes up with no results – not because the information does not exist, but because it does not index or rank webpages in a way that is optimized for complex queries.

The corpus of information is also diversifying: there is more information in e-books, chat logs, and Facebook conversations than can be found simply by crawling the hypertext. But the Google search engine has not matched this diversification, opting simply to develop the Knowledge Graph to become a primary and secondary source of information.

I think this would be a great direction a search engine such as DuckDuckGo could take to compete more directly with Google Search in a dimension other than privacy. After all, Google Search is no longer Google’s main product.

Migration event soon

I tried to connect to my own website on Friday, but the connection kept timing out. My mind raced with all of these awful thoughts: maybe some script kiddie had finally breached the PHP process and decided to bring everything down. Or perhaps a disk had failed on the old SCSI RAID array, and now the server was just waiting for me to connect a keyboard and press Enter all the way back at home to start it in degraded mode.

But alas, the reality was none of that. Upon returning home on Saturday, I entered the attic and saw that the server was off. I impatiently turned it on, the machine roaring to life once again, fans spinning at idle. I supervised the whole boot: everything good. Maybe there was a power outage?

Yet more wrong guesses. The culprit was my father, who had decided to turn the server off (God knows how – did he really hold the power button until it turned off?) without any express notice. Later, he made an off-hand remark about how he had turned the server off, not knowing that I had turned it back on again.

I want – well, now need – to migrate the server. It’s old, it’s heavy, it’s loud, and it’s expensive to power (it costs about as much as the pool filter in kilowatt-hours per month). It’s pointless to keep it around, and probably embarrassing to explain why I still use it.

My first option is to throw the site into my DigitalOcean droplet. I could use a Docker container, but then I would have to learn how to deal with volatility and general maintenance.

There is also the option of converting everything to Jekyll; the main problem is that I am very unfamiliar with Ruby, and I would lose the markup flexibility of HTML (at least that’s the impression it gives me). On top of that, I don’t know how to transplant my blog template into a Jekyll template (it’s not my template!), and I don’t want to give in to the overused templates they offer. And then, after all that, where would I host the site? GitHub? There’s no reason to push my rants into GitHub so the world can see what kinds of “contributions” I make every couple of weeks.

Finally, there is the option of moving to a Raspberry Pi, which would grant me continued access to my home network, zero maintenance costs (my parents pay for power), and minimal changes to the web stack I currently use.

So immediately before leaving for college again, at the cost of probably arriving late, I fumbled around for my Raspberry Pi and connected it to the Ethernet port in my room. I guessed the password a couple of times over SSH and then just decided to pull out a keyboard and break into it locally so that I could remember what the password was. Oh, right, it’s those credentials. I shoved the keyboard and dongle back into my duffel bag, gathered my other things, and finally set out.

Now it is my responsibility to get the RPi up to speed as the successor to the PowerEdge 2600.

Domain change

After an entirely unexpected drop of the extremely popular domain (yes, visitors from Google, “ is down”!), it became impossible to reach the website via due to an unreachable path to FreeDNS. Thus, I decided to just finish moving to It took a while to figure out how to get WordPress back up and pointing to, but I eventually succeeded.

What I do not know, however, is if I will succeed in finishing the account of the Japan travel. I have been putting that off for too long now. Ugh.

Amazing find!

So I reluctantly got an older version of Pokemon Type Wild and started playing it. I noticed that the music sounded a bit funkier… and I found that the music in the older versions of Type Wild was all MIDIs!

I find MIDI to be a highly flexible format (you can make the instruments 8-bit, mash them up, …); as such, I cherish MIDI, and especially VGMusic for its massive library of MIDI-fied (and original) game music.

This is a lucky discovery, because now I can remake the MIDIs into nicer-quality audio files for those who don’t have 500 MB worth of instrument data.


After thinking about making a blog for a very long time, I decided to just download WordPress and go for it. It’s by far the most popular blogging software on the Internet – and the most sought-after by spammers and h4x0rs. “Blogging” in Word documents, while giving a nice user experience, means absolutely nothing because nobody is reading what you’re writing. Your writings simply remain in draft form… forever.

So why not post it on the Web, where everyone can read it? Yeah, “everyone” meaning that one guy who found this tiny blog in a corner of the interwebs. Speaking at a podium in an auditorium empty save for that one guy is good practice, I would think. To speak before man, you must speak before his subordinate: the chair.

Wow, 130 words so fast? Dang, I’m already getting good at blogging.

This website definitely needs some attention. Maybe I will find some time to fill it with content.