Category: Uncategorized

Most stripped-down version of Chromium

I am curious how far Chromium can be stripped down for embedded applications. The natural thing to do, of course, is to not use HTML for embedded applications. But I wonder: can one strip down Chromium to its bare minimum? No video or audio, no WebGL, no MIDI, no CSS 3D animations, no WebSockets, no profiling, no extensions, no developer console – basically nothing except the layout engine, a JS engine, and a basic renderer. The target is to minimize binary size and memory footprint while still drawing from an actively maintained code base, as HTML 4 engines are not really maintained anymore, and many modern websites do not render properly on HTML 4 engines.
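As a rough illustration of the direction, Chromium's GN build system already exposes arguments that disable whole subsystems. Flag names drift between releases, so treat this `args.gn` fragment as an illustrative sketch rather than a working recipe:

```gn
# Hypothetical args.gn sketch for a slimmed-down build; flag names vary by release.
is_debug = false
symbol_level = 0
blink_symbol_level = 0
enable_nacl = false           # no Native Client
enable_extensions = false     # no extension system
enable_pdf = false            # no built-in PDF viewer
enable_print_preview = false  # no print preview UI
enable_remoting = false       # no Chrome Remoting
proprietary_codecs = false    # no licensed media codecs
```

A truly minimal build (no media stack, no WebGL, no developer console) would have to go beyond the public GN arguments and patch the source, since many of those features are not gated behind build flags at all.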

Just an idle thought. No action is needed.

Moral dilemma (public)

I know I’ve wasted my time when my joints feel stiff, my eyes are inflamed, and my mind is unstimulated. Right now, I’m playing TF2 to pass the time; I had originally planned to watch a movie, but those plans fell through.

It’s difficult to stay motivated when most of my friends are in Austin, doing whatever. For me, the only real excitement will be flying to the East Coast again for the yearly plane competition, but undoubtedly there will be an extremely stressful problem that will counterbalance the excitement.

Sunday. Watching Detective Pikachu with a small child laughing at my side apparently did not do enough to quiet my conscience, which was too busy reeling from an unsettling headline I had spotted in the bookstore only half an hour before the movie.

I tried to imagine how my friend would take the movie. She would probably laugh; maybe I take movies too seriously. But what bothers me is that the film lacked any inherent meaning. It did not cause me to think, perhaps because I was too busy cringing at Ryan Reynolds cast as the voice of a squeaky little Pikachu. While the true reason behind this was explained at the end of the movie, I would rather have had a woman play the role.

I left the theater reconsidering my life decisions again, while my brother remarked that he was “fine” with watching meaningless movies as long as they entertained him. He then suggested some profound Japanese films, such as Ghost in the Shell. I said yes, anything. I don’t care how mind-boggling the subject matter is anymore. At this point, I could probably take it.

I need to begin a major research effort on the topic of morality. I do not feel accepted in the hacker community due to my beliefs and values – or rather, the fact that I even have any to begin with. The topic of religion is taboo, and any political belief other than the default liberal stance commonplace in the East Coast is viewed as if it were 21st-century animism.

From the liberal standpoint, I see a completely different definition of law. Law is a set of policies enforced by the government to promote and ensure human progress. What constitutes steps forward and not steps back is entirely at the discretion of politicians. Individuals are free to make any choice as long as it does not negatively impact the rest of society.

From the conservative standpoint, law is an enforcement of moral standards designed to preserve stability. Tradition, rituals, and a strong set of values define a conservative culture where deviation from the norm is taught as a telltale sign for danger. The reasoning goes that stability has led to success, and therefore individuals should be guided into making consistent choices that lead to the making of a predictably successful society.

There is some truth to every philosophical argument posed, but we seem to have reached a momentous period where political groups seek to make changes based on incompatible ethical theories. As a result, we end up with a legal code that is really a patchwork of laws built on inconsistent theories of interpretation, all of which must then be carefully navigated in a court of law. It is like 200-year-old source code, with hacks built on top of each other, where the drastic changes in design principles and paradigms across the years can be seen as clearly as tree rings on a trunk.

Since it does not seem possible to agree on a universal ethical framework, and since the currently established ethical frameworks are drawn primarily along cultural lines, perhaps the world’s jurisdictions should be divided back into their respective cultures. While multiculturalism has shown that people of many cultures are perfectly capable of collaborating, and that humans are perfectly willing to assimilate into a dominant culture, it does not discount the idea that culture defines discrete boundaries between the in-group and the out-group. It is the very nature of how humans identify who is an enemy and who is a friend. If these hot-button debates continue in the United States, the heterogeneity of its culture might mean that Americans will soon see everyone as a potential enemy. We will see how true this really is as the story unfolds.

In the meantime, it is difficult to entertain myself when existentialism clouds my thoughts.

Using OpenPGP for video games

Cryptography seems to be all the rage these days: it is a method to strongly prove the occurrence of an event, and no one can question the universal truth of mathematics. The blockchain hype merely serves as witness to this trend toward cryptographic verification.

I see cryptography as a topic long avoided by mainstream software developers due to its core functionality being backed by pure math, a field which software developers are either not fond of or not competent in (or both). It is often perceived as a “black box” which ought not to be touched or recreated, lest one’s application be infested with thousands of security issues. However, cryptography is not purely math, and today, well-tested abstractions exist to make common cryptographic applications understandable and implementable.

Online video games tend to be backed by a central or master server, which places two main liabilities on the part of the maintainer of the game:

  • the responsibility of the maintainer to secure personal information within the server (such as email addresses and passwords) and to report security breaches; and
  • the regular maintenance of the server, which is essential to keeping the game playable.

However, as time progresses, the maintainer of a game is more likely to renege on these responsibilities for economic reasons, either causing the player’s experience to be significantly degraded or rendering the game entirely unplayable.

Replacing a master server is not the main focus of this writing, however; rather, I wish to discuss an important issue in online games: whom to trust.

I wish we could trust everyone. But when a game gets sufficiently large, it becomes statistically likely that some player will decide to cheat or flood a server in an attempt to break the game experience – and now there is one player we cannot trust.

With the rising popularity of virtual private networks and proxies, anonymity is king. It is nearly impossible to uniquely identify a player without a master server prompting for a particular set of credentials. Even then, it is easy for a banned player to create new credentials and begin cheating or spamming again – the natural response is to increase the amount of personal information demanded by the central server, but this likewise increases the liability held by the owner.

One approach to mitigate this problem is by employing a cryptosystem specifically designed to solve the problem of trust – in our case, who should be allowed into a server, and who should not. OpenPGP is one such well-established system that uses a decentralized web of trust to systematically determine who can be trusted, without the liability of a centralized server or unintentionally restricting legitimate users from playing (such as banning an excessively large IP range, or requiring users to provide personal information that violates some users’ privacy).

OpenPGP lacks adoption primarily because of its unintuitive nature and its dependence on everyone using the system before individual users benefit from it (a collective action problem). However, the limited domain in which OpenPGP would operate here allows it to be enforced behind the veil of abstraction. Users never need to know what “keys” or “signing” are – they only know that they have an identity that they can optionally secure with a password. They can also “like” other users. If their identity becomes compromised, they can choose to “destroy” it forever. Behind the scenes, keys have a six-month expiration date that is automatically renewed simply by playing the game.

On the server side, the server maintains an extended whitelist derived from a basic whitelist of fully trusted primary identities. Identities that are (directly or indirectly) trusted by those primary identities may also be allowed to join the server. After authentication succeeds, the server can reliably recognize the identity of a player, which is useful if the player has a specific rank or level on that server.
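The extended-whitelist computation is essentially a bounded graph traversal over trust signatures. A minimal sketch in TypeScript – all names here are hypothetical, standing in for whatever a real OpenPGP keyring library would expose:

```typescript
// Hypothetical sketch: derive an extended whitelist from a basic whitelist
// of fully trusted primary identities by following signature (trust) edges
// up to a fixed depth. Not a real OpenPGP API.
type KeyId = string;

interface Signature {
  signer: KeyId;  // who vouched
  subject: KeyId; // who is vouched for
}

function extendedWhitelist(
  primaries: KeyId[],
  signatures: Signature[],
  maxDepth: number
): Set<KeyId> {
  // Adjacency list: signer -> identities they have signed.
  const edges = new Map<KeyId, KeyId[]>();
  for (const sig of signatures) {
    const list = edges.get(sig.signer) ?? [];
    list.push(sig.subject);
    edges.set(sig.signer, list);
  }

  // Breadth-first search outward from the primary identities.
  const allowed = new Set<KeyId>(primaries);
  let frontier = [...primaries];
  for (let depth = 0; depth < maxDepth; depth++) {
    const next: KeyId[] = [];
    for (const key of frontier) {
      for (const subject of edges.get(key) ?? []) {
        if (!allowed.has(subject)) {
          allowed.add(subject);
          next.push(subject);
        }
      }
    }
    frontier = next;
  }
  return allowed;
}
```

Capping the traversal depth mirrors how OpenPGP limits trust-path length, so a single careless signature cannot transitively admit the whole keyserver.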

If all servers are whitelisted, then how can new players join? An optional centralized server can automatically grant new players temporary trust, which allows them to join “newbie” servers and build up trust until they find themselves allowed into other servers. While this disincentivizes curiosity, it incentivizes playing on the same server as others, as well as social integration into the community. Alternatively, new players can request trust through a side channel, such as a community chat server or forum outside of the game.

If a player has lost the trust of the community by breaking a major rule or through social engineering, other players can revoke trust just as easily as they imparted it: the OpenPGP system allows for signature revocation.

In short, an ideal implementation of public key-based infrastructure in video games would be seamless to users, eliminate the costly upkeep of a strong centralized server, and encourage regular social activity.


I woke up reading the wrong part of the Internet. Not wrong as in factually wrong – wrong as in something that does not help to stabilize my mind or bring me to a higher level of understanding of the world and that which is beyond our grasp.

The polar divide in ideology which I witness today is, from what I can see, a mostly American phenomenon spurred by intense political competition. While other countries have seen localized and sometimes violent conflict between two sides (often racially driven), never before has a war of information been seen at this scale and ferocity.

As someone who is acutely aware of all of this, it causes me great anxiety to think that every side and opinion has its fair share of criticism – as if everything in the world were wrong. Everything can be criticized, discredited, falsified, and undercut. And yet that is not a very helpful perspective, either – it just means that in the eyes of someone else, what we do is potentially “wrong.”

It’s difficult to step back and reevaluate why we believe what we believe, and why others believe what they believe, without demonizing or making a mortal enemy out of an “opposing camp.” And yet, it’s not possible to agree with everything – yet it is a necessity to respect those with whom we disagree.

This is the paradox of individualism and unity. The economic theory of comparative advantage requires us to work together to compose a society that runs for the benefit of all of its members; yet, individualism implores each of us to scrutinize the world under our own lenses, and to form our own opinions that may oppose the opinions of others.

Despite all of these concerns clouding my mind, somehow I can feel happy and accomplished. Somehow, I take pride in my accomplishments and rejoice in the successes of others. Somehow, I can just be myself and not worry.

Using JS for Qt logic

Sorry, but I’m going to have to take sides on this one: Electron sucks. At its core, it converts a program into an embedded web browser with somewhat looser restrictions on interfacing with the host system, only to make it possible for web developers to target the desktop without learning a new language. The benefits of Electron are heavily biased toward developers rather than end users, who should be the real focus of the entire user experience.

The problem is that HTML5 makes for an abstract machine that is far too abstract, and the more layers of abstraction that sit between the software and the hardware, the more overhead there is from translation and just-in-time compilation. The result is a web browser that uses 120 MB of shared memory and 200 MB of resident memory per tab.

For a web browser, though, abstraction is good. It makes development easy without sacrificing user experience for what is essentially a meta-program. But for any other desktop application with a specific defined functionality, this abstraction is excessive.

This is generally why I like Qt: it is one of the only libraries left that continues to support native desktop and embedded application development without sacrificing performance. The one problem with it, however, is that this performance requires the use of C++, and most people do not know how to work with C++. Moreover, it requires double the work if the program is also being targeted at the web.

There does exist QML, which removes most of the C++ and exposes a nice declarative syntax that combines both layout and logic into a single file. However, it has two significant problems: it adds even more cruft to the output program, and custom functionality still requires interfacing with C++ code, which can get a little difficult.

Qt’s main Achilles’ heel for a long time has been targeting the web. There are various experimental solutions available, but none of them are stable or fast enough to do the job yet.

I’ve come up with an idea. Qt exposes its V4 JavaScript engine (a JIT engine) for use in traditional C++ desktop programs. What I could do is the following:

  • Write JS code that both the browser and desktop clients share in common, and then make calls to some abstract interface.
  • Implement the interface uniquely for each respective platform.
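The two steps above could be sketched as follows in TypeScript. `Platform`, `readSetting`, and the host object shape are all invented names for illustration, not Qt API:

```typescript
// Hypothetical sketch: shared logic codes against an abstract interface,
// and each platform supplies its own implementation.
interface Platform {
  readSetting(key: string): string | null;
  showNotification(message: string): void;
}

// Browser implementation would be backed by Web APIs (stubbed here).
class BrowserPlatform implements Platform {
  readSetting(key: string): string | null {
    // A real browser build would consult window.localStorage here.
    return null;
  }
  showNotification(message: string): void {
    // A real browser build would use the Notifications API here.
    console.log("[browser] " + message);
  }
}

// Desktop implementation delegates to an object the C++ host injects
// into the V4 engine (sketched as a plain object for illustration).
class DesktopPlatform implements Platform {
  constructor(
    private host: {
      setting(key: string): string | null;
      notify(msg: string): void;
    }
  ) {}
  readSetting(key: string): string | null {
    return this.host.setting(key);
  }
  showNotification(message: string): void {
    this.host.notify(message);
  }
}

// Shared logic is identical on both platforms:
function greet(platform: Platform): string {
  const name = platform.readSetting("name") ?? "world";
  const message = "Hello, " + name + "!";
  platform.showNotification(message);
  return message;
}
```

Only the two thin `Platform` implementations are platform-specific; everything above `greet` in complexity ships as one shared bundle.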

For instance, the wiring for most UI code can be written in C++, which then exposes properties and calls events in JS-land. Heck, Qt already does most of that work for us with meta-objects.

How do I maintain the strong contract of an interface? You need a little strong typing, don’t you? Of course, of course – we can always slap in TypeScript, which, naturally, compiles to standards-compliant JavaScript.

The one problem is supporting promises in the JS code that gets run, which mostly depends on the capabilities of the V4 engine. I think it supports promises, but this does not seem well documented. Based on this post about invoking C++ functions asynchronously, I think I need to write callback-based functions on the C++ side and then promisify them when connecting the JS interface to the C++ side. That shouldn’t be too hard.
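A minimal sketch of that promisification step, assuming a Node-style error-first callback convention on the C++ side; `Backend` and `loadFile` are invented names standing in for whatever the host actually exposes:

```typescript
// Hypothetical sketch: wrap a callback-style method exposed from C++
// (e.g. via an object registered with the JS engine) into a Promise,
// so shared code can stay async/await-based.
type Callback<T> = (error: string | null, result?: T) => void;

interface Backend {
  loadFile(path: string, cb: Callback<string>): void;
}

function promisify<T>(fn: (cb: Callback<T>) => void): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    fn((error, result) => {
      if (error !== null) reject(new Error(error));
      else resolve(result as T);
    });
  });
}

// Shared code then calls the platform interface uniformly:
async function readConfig(backend: Backend): Promise<string> {
  return promisify<string>((cb) => backend.loadFile("config.json", cb));
}
```

The C++ side only ever sees plain callbacks; the promise machinery lives entirely in the JS glue layer, which keeps the engine requirements minimal.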

Note that important new features for QJSEngine, such as ES6 support, were only added in Qt 5.12. This might complicate distribution on Linux (since Qt continues to lag behind in Debian and Ubuntu), but we’ll get there when we get there – worrying about it now is like thinking about tripping on a rock at the summit of a mountain while we are still at base camp.

Indexing the past and present

With the shutdown of GeoCities Japan, we are reaching an important point in the history of the Internet: important historical information is vanishing, replaced by new content hidden away as small snippets inside social media systems.

It is becoming increasingly apparent that a vast trove of information is simply missing from Google Search. Because the engine aggressively favors well-ranked sites, user-made sites with obscure but useful information are indexed less, and their lack of maintenance eventually leads to their loss forever.

For instance, I was only able to find MIDI versions of Pokemon Ruby and Sapphire music from a site hosted by Comcast. After the shutdown of Comcast personal sites, the information was lost to indexing forever and hidden away in the Internet Archive.

What I propose is the indexing and ranking of content in the Internet Archive and social media networks to make a powerful search engine capable of searching past, present, and real-time data.

A large fault of the Google Search product over the years has been its dumbing down of information during the aggregation process of the Knowledge Graph, which inhibits the usefulness of complex queries. If a query is too complex (i.e., contains keywords that are too far apart from each other), Google Search will attempt to ignore some keywords to fit the data it has indexed, which only fits into particular categories or keywords. If the whole complex query is forced, though, Google Search will be unable to come up with results – not because the information does not exist, but because it does not index or rank webpages in a way that is optimized for complex queries.

The corpus of information is also diversifying: there is more information in e-books, chat logs, and Facebook conversations than can be found simply by crawling the hypertext. But the Google search engine has not matched this diversification, opting simply to develop the Knowledge Graph to become a primary and secondary source of information.

I think this would be a great direction a search engine such as DuckDuckGo could take to compete more directly with Google Search in a dimension other than privacy. After all, Google Search is no longer Google’s main product.

Migration event soon

I tried to connect to my own website on Friday, but the connection kept timing out. My mind raced with all of these awful thoughts: maybe some script kiddie finally breached the PHP process and decided to bring everything down. Or perhaps a disk failed on the old SCSI RAID array, and now the server is just waiting for me to connect a keyboard and press Enter all the way back at home to start the server in degraded mode.

But alas, the reality was none of that. Upon returning home on Saturday, I entered the attic and saw that the server was off. I impatiently turned it on, the machine roaring to life once again. I supervised the whole boot: everything good. Maybe there was a power outage?

Yet more wrong guesses. The culprit was my father, who decided to turn the server off (God knows in what way – did he really push the power button until it turned off?) without any express notice. Later he made an off-hand remark about how he had turned the server off, not knowing that I turned it back on again.

I want – well, now need – to migrate the server. It’s old, it’s heavy, it’s loud, and it’s expensive to run (it costs about as much per month in kilowatt-hours as the pool filter). It’s pointless to keep it around, and probably embarrassing to explain why I still use it.

My first option is to throw the site into my DigitalOcean droplet. I could use a Docker container, but then I would have to learn how to deal with volatility and general maintenance.

There is also the option of converting everything to Jekyll; the main problem is that I am very unfamiliar with Ruby, and I would lose the markup characteristics of HTML (at least that’s the impression it gives me). On top of that, I don’t know how to transplant my blog template into a Jekyll template (it’s not my template!), and I don’t want to give in to the overused templates they offer. And then, after that, where will I host the site? GitHub? There’s no reason to push my rants onto GitHub so the world can see what kinds of “contributions” I make every couple of weeks.

Finally, there is the option of moving to a Raspberry Pi, which would grant me continued access to my home network, zero maintenance costs (my parents pay for power), and minimal changes to the web stack I currently use.

So immediately before leaving for college again, at the cost of probably arriving late, I fumbled around for my Raspberry Pi and connected it to the Ethernet port in my room. I guessed the password a couple of times via SSH and then just decided to pull out the keyboard and break into it locally so that I could remember what the password was. Oh, right – it’s those credentials. I shoved the keyboard and dongle back into my duffel bag, gathered my other things, and finally set out.

Now, it is my responsibility to get the RPi up to speed as the successor to the PowerEdge 2600.