Threading in AC

Last time I read about threading, I read that “even experts have issues with threading.” Either that’s not very encouraging, or I’m an expert for even trying.

There are a bunch of threads and event loops in AC, and the problem of coordinating them is unavoidable. Here is an executive summary of the primary threads:

  • UI thread (managed by Qt)
    • Uses an asyncio event loop, although some documentation encourages me to wrap it with QEventLoop for reasons it never specifies. So far, it’s working well without QEventLoop.
    • The core runs on the same thread using a QPygletWidget, which I assume keeps its OpenGL resources separate from the rest of the UI.
      • Uses QTimer for calling draw and update timers
      • Uses Pyglet’s own event loop for coordinating events within the core
  • Network thread (QThread)
    • Uses its own asyncio event loop, relying on asyncio futures and ad-hoc Qt signals to communicate with the UI thread (a rough sketch follows this list).
    • Main client handler is written using asyncio.Protocol with an async/await reactor pattern, but I want to see if I can import a Node-style event emitter library, since I was going that route anyway with the code I have written.
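
To make this concrete, here is a minimal sketch of the pattern I have in mind for the network thread – a QThread that runs its own asyncio event loop and hands results back to the UI thread through a Qt signal. It assumes PyQt5, and the class and signal names are placeholders of my own, not actual AC code.

    # Minimal sketch (PyQt5 assumed; names are placeholders, not AC code).
    import asyncio
    from PyQt5.QtCore import QThread, pyqtSignal

    class NetworkThread(QThread):
        # Emitting a Qt signal is thread-safe; the UI thread receives it
        # through its own event loop as a queued connection.
        message_received = pyqtSignal(str)

        def run(self):
            # Each QThread gets its own asyncio loop, separate from the UI's.
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
            loop.run_until_complete(self._main())

        async def _main(self):
            # Stand-in for the real asyncio.Protocol-based client handler.
            while True:
                await asyncio.sleep(1)
                self.message_received.emit("heartbeat")  # hand off to the UI thread

    # Usage from the UI thread (hypothetical widget):
    #   net = NetworkThread()
    #   net.message_received.connect(chat_widget.append_line)
    #   net.start()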

My fear is that the network threads will all get splintered into one thread per character session, and that Pyglet instances on the UI thread will clash, resulting in me splintering all of the Pyglet instances into their own threads. If left unchecked, I could end up with a dozen threads and a dozen event loops.

Then there is the possibility of asset worker threads for downloading. The issue here is possible contention when multiple threads update the local SQLite asset repository.
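
One way to avoid that clash is to have a single dedicated writer own the SQLite connection and have every downloader funnel its writes through a queue. A minimal sketch, assuming a writer-thread design (the table layout and names below are hypothetical):

    # Minimal sketch: one writer thread owns the SQLite connection; asset
    # workers only ever put (name, data) tuples on the queue.
    import queue
    import sqlite3
    import threading

    def asset_db_writer(db_path, write_queue):
        # Only this thread opens or touches the connection, so writes
        # never clash even with several download workers running.
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS assets (name TEXT PRIMARY KEY, data BLOB)")
        while True:
            item = write_queue.get()
            if item is None:  # sentinel to shut the writer down
                break
            name, data = item
            conn.execute("INSERT OR REPLACE INTO assets VALUES (?, ?)", (name, data))
            conn.commit()
        conn.close()

    write_queue = queue.Queue()
    writer = threading.Thread(target=asset_db_writer, args=("assets.db", write_queue), daemon=True)
    writer.start()
    # A download worker would then just call: write_queue.put((asset_name, asset_bytes))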

The only way to properly manage all of these threads is to take my time and write clean code. I cannot rush to write code that merely “works,” because of the risk of dozens of race conditions bubbling up, not to mention the technical debt I would incur. Still, if I design this correctly, I should not need a single lock, thanks to the GIL.

One year after Japan

One year after returning from Japan, I have learned an academic year’s worth of knowledge and grown a year more mature.

I spent vivid days enjoying lunch with others and lonely nights sulking in my dorm. I spent boring Sundays eating lunch at Kinsolving and busy days going to club meetings with people I never saw again.

As the sun changed inclination, so did my mind, it seems. Perspectives have changed. My mind melds and rearranges itself, disconnecting itself forever from the old memories of the physics lab and the traumatizingly strenuous AP exams.

As the semesters progress, people come and go. I am pulled out of one world and thrust into another, yet Japan still feels like it happened last week. While I cannot recall all memories, the key memories still feel relatively vivid. I still feel the cotton of the yukata on my body; the refreshing chill of the small shower beside the onsen; the onsen’s disappointingly intolerable warmth; the calm, collected smile of the cashiers and service workers; the bittersweetness of having only been able to visit Akihabara once; the American pride of my Japanese teacher.

It is not certain what I will be doing on June 28, 2019, but it is certain that I will be saving money to return to Japan in 2020 for a study-abroad program.

When I noted in November that the experience will never happen again, I was correct – but this is merely to make way for even greater experiences in the unknown future.

My friend wishes to go to Japan for another week, but after viewing airline price ranges and possible dates, I politely observed that one week was simply not enough time – the insatiable longing to return to Japan would simply repeat itself. No: I need an entire semester to evaluate the culture of Japan, its people, and what it holds in store for enjoyment. I do not wish to merely cherry-pick what interests me, but rather to immerse myself completely in the language and culture. This should be enough to satisfy any individual.

However, I recognize that after this point, reminiscing about specific details of the trip is an obsession. I must strive to look forward and continue my studies of Japan from a holistic perspective.

(02/01/2020: June 28, 2019 was a calm, quiet day during my summer internship. I wrote a private blog post on that day, ironically reminiscing about the past.)

The S9

I got an S9 from my father as part of a deal. I did not want the phone, but he got it anyway. This is a flagship device costing almost $1,000; not exactly a small step-up from the S4.

I have been trying not to get the phone dirty with my sweaty hands, but too late for that. It appears to be a well-built and well-designed phone, although it looks prone to damage without adequate casing.

I am not particularly fond of two things: materialism, and giving away random information to any app that wants it.

I mention materialism because nothing lasts forever – the S4, at its time, was the pinnacle of technology, but we have somehow advanced even further in five years. It is difficult to imagine what a phone will look like in five more years. One must also remember that the smartphone is an instrument designed to get things done – an integrated PDA and cell phone – although these days it serves more as a game console.

There are also immense privacy risks one takes simply by using this phone. Android has grown to such tremendous complexity that even I, a programmer, cannot comprehend the full design of the system. Many more apps now grab your location, too, since optimizations have made it possible to obtain a fine location without draining the battery. And the system has become so tightly integrated that practically anything can access anything (if you allow it to).

The strongest aspect of this phone is its speed – whereas Google Maps takes 6 seconds to cold-start on my S4, it loads in about 1 to 1.5 seconds on the S9; essentially instantly.

Finally, this phone allows me to place “HD Voice,” “VoLTE,” “Wi-Fi,” and “HD Video” calls. All of these things seem to be exclusive to AT&T users, with a supported SIM card, with a supported phone (i.e. not an iPhone), in a supported location, on both sides. In essence, the feature is useless for 90% of calls[citation needed]. How much longer will it take to develop and adopt a high-quality communications infrastructure that is standard across all devices and all carriers, including iPhones? Whatever happened to SIP – why didn’t Cingular give everyone a SIP address back in the day? Why do I have to use a cell phone to place a call using my number? Why do we still use numbers – when will we be able to switch to an alphanumeric format like e-mail addresses?

Yes, I understand that we have to maintain compatibility with older phones and landlines via the PSTN – whatever that is these days – and we also have to maintain the reliability of 911 calls.

The walled-garden stubbornness of Apple does not help much, either. Apple simply stands back and laughs at the rest of the handset manufacturers and carriers, who are struggling to agree on common communication interfaces and protocols. Will Apple help? Nope. Their business thrives on discordance and failure among the other cell phone manufacturers to develop open standards. And when they finally agree on an open standard ten years later – yoink! – Apple adopts it instantly in response to the competition.

As for other features, I found the S9’s Smart Switch feature to work perfectly: it was able to migrate everything on my S4, even the things on my SD card (I recommend removing the SD card from the original phone before initiating a transfer). It did not ask me about ADB authorization or anything like that, so I wonder how it was able to connect to the phone simply by my unlocking it.

When Android will finally have a comprehensive backup and restore feature, however, remains beyond my knowledge. This is Android’s Achilles heel by far.

Oh, and I forgot one last thing about the S9: it has a headphone jack 🙂

On Let’s Encrypt

Let’s Encrypt has been operational for about two years now, although the project originally began in 2015. Let’s Encrypt is the saving grace of HTTPS, but it is precisely because it is the saving grace of HTTPS that I dislike its endorsement.

Suppose that tomorrow, a security researcher discovers a critical flaw in Certbot or some other part of the Let’s Encrypt certificate issuance system, and within a week, almost every Let’s Encrypt cert would have to be tossed into the CRL, with no ability to issue new certs.

They couldn’t do it. They couldn’t possibly toss 100 million certificates into the fire, because LE has already reached a point where it is too big to fail. You can’t tell your users, who expect their website encryption to come for free, “Hey, your CA got compromised, so you’re going to have to pay $20 or more for a cert from Verisign, GeoTrust, or Comodo, because there are no other free, secure CAs available. Sorry.”

And if it comes to that, two things happen:

  1. Verisign et al. gouge prices and have the biggest cert bonanza ever, because website owners have no other choices.
  2. An HTTPS blackout happens, and half of all HTTPS-enabled websites have no choice but to fall back to regular HTTP. And if this happened with a version of Chrome where insecure browsing is banned, then you can just forget about that website unless you are a website owner and choose (1).

You have to remember the situation before Let’s Encrypt: browser vendors, most especially Google and Mozilla, were pushing as hard as they could toward eradicating HTTP and enforcing HTTPS everywhere, in light of the Edward Snowden and NSA hysteria bordering on paranoia. However, SSL/TLS certificate options were limited at the time: the existing free certificate services had been founded long before then and were commonly suggested to people who were absolutely desperate for a free certificate, but were nonetheless unpopular among CA maintainers due to rampant abuse. In other words, on the idealistic side, people believed that every site ought to have HTTPS. But on the practical side, they asked whether your site really needed HTTPS if you couldn’t afford a certificate and were just serving static content.

Today, those old free CAs have been abandoned by CA maintainers in favor of the one CA to rule them all: the ISRG/Let’s Encrypt CA. I mean, we’re obviously not putting all our eggs in one basket here – if something goes wrong, we still have hundreds of CAs to go by, and if an owner really needs their HTTPS, they can just shell out $100 for a cert. That’s right, if you’re a website owner who cares more about their website than the average Stack Overflow user, you should really consider shelling out money, even though we’re sponsoring a cert service that is absolutely free! Oh, and if something goes wrong, you get what you paid for, right? My logic is totally sound!

Let me reiterate: in the case of a future catastrophe, assuming we are far enough into the future that browsers have placed so much trust in the HTTPS infrastructure that they now prevent casual connections to insecure HTTP websites, there are two answers based on how much money you have:

  1. You’re f**ed, along with millions of website owners. More news at 11. Maybe the folks at Ars Technica can tell you what to do. Except they’re also too busy panicking about their personal websites.
  2. Buy a cert before they raise their pri– oh, too late, they’re $50 a pop now.

So, I think the problem at hand here is the philosophy behind trust. Trust is such a complicated mechanic in human nature that it cannot be easily automated by a computer. When we make a deal on Craigslist, how do we know we’re not going to end up getting kidnapped by the guy we’re supposed to be meeting with? Is the only reason a bureaucracy trusts me as an individual because I can give them an identification card provided by the government? But how can I, as an individual, trust the bureaucracy or the government? Only because other people trust them, or people trust them with their money?

How does this tie into the Internet? How can I trust PKI, the trust system itself? What happens if I tie a transactional system – specifically the likes of Ethereum – into a web-of-trust system such as PGP? What happens if I tell people, “vote who you trust with your wallets”? What is a trustable identity in a computer network? What remedies does an entity have if their identity is stolen?

On Windows

I have held off on making a post like this for a long time now, but I think it is now the time to do so.

I thought things would improve with Windows, but for the past five years (has time really gone so quickly?), Microsoft has not done anything for their power users, effectively leaving them in the dark while it “modernizes” its operating system for small devices (netbooks and tablets).

Microsoft knows full well that power users are leaving in droves for Linux, so they developed the Windows Subsystem for Linux – essentially a remake of Interix – to allow people to “run Ubuntu” on their machines, all while keeping the familiar taskbar on their desktops and without having to tread into the territory of repartitioning, package management, and drivers. By playing up distros’ terse and hard-to-read documentation as a reason to stay on Windows, Microsoft has kept the uninformed lured into Windows 10.

Let’s remember what Windows used to be primarily for: office applications. Professionals and businesspeople still use Windows every day to get their work done. They were so invested in the system, in fact, that some of them took to learning keyboard shortcuts and other nooks and crannies of the system to work even faster (or because using a mouse was not comfortable).

Today, Windows is used for three reasons:

  1. Microsoft Office dominates the market for productivity.
  2. Windows comes with almost every personal computer that isn’t a Mac.
  3. After MS-DOS, Windows was the go-to platform for PC gaming, and it still is. As such, gamers are reluctant to move anywhere else, lest their performance decrease.

The weight of Win32’s legacy features is too heavy a burden to keep Windows moving forward as it is. Windows 10 has a multi-generational UI: modern UI (e.g. PC settings menu) from Windows 8 and 10, Aero UI (e.g. Control Panel) from Windows Vista and 7, Luna icons (e.g. Microsoft IME) from Windows XP, and UI that hasn’t changed since the very beginning (e.g. dial-up, private character editor) from Windows 98 and 2000.

The problem is that many business users still depend on Win32 programs. Microsoft is in an extremely tight spot: they must push for new software, all the while keeping friction as low as possible during the transition process.

But if Microsoft is going to eradicate Win32 anyway, why bother developing for UWP? Why not take the time now to develop cross-platform applications? Hence, companies that care – that is, companies that do not sell their 15-year-old software as if it were “new” in 2018 – are targeting either the web or Qt (which is very easy to port). Programs that require somewhat tighter integration with Windows are very likely to use .NET, which means pulling out C#.

Here are some reasons I still use Windows on my desktop:

  1. I am accustomed to the keyboard shortcuts. (i.e. sunk cost)
  2. Microsoft Office.
  3. I can pull out a VM if I need Linux.

However, these reasons are becoming less relevant: I am unfamiliar with Windows 10 (due to its inconsistent UI), and Windows 7 is losing support soon. Moreover, a reliable method of installing Office through Wine is being developed, and new technologies that allow hardware pass-through, such as VT-d, have brought gaming performance in a VM to nearly match that of running Windows natively.

I am also tired of the support offered for Windows: those who actually know what they are talking about are called “MVPs,” and everyone else simply seems to throw canned messages at support requests. For instance, if you look up “restore point long time” on Google, the first result is a Quora question titled “Why does system restore point take so long on Windows 10?” with some nonsensical answers:

  • It’s very fast, but restoring it can take a little while. Maybe you are referring to a system backup. Download this backup software and it should be super fast.
  • Just read the article on How-To Geek and it should cover everything. Two hours is worth it to get your computer working again. And if a restore point doesn’t work, just try another one.
  • Microsoft optimizes their DLLs for speed. Also, restore points are disabled by default.
  • This is a terrible feature.
  • Here is how to create a restore point. Go to the Start menu…
  • The “multiple levels of code” is just so much more advanced in Windows 10.

None of them answer the question: why does creating a system restore point take so long?

You can probably find similar blabber for why Windows Installer takes so long, or some technical feature of Windows.

These days, I don’t really think many people know how Windows actually works. How in the world am I supposed to use an operating system whose inner workings nobody understands?

In comparison, any other well-supported Linux distribution has people so tough on support that they will yell at you to get all kinds of logs. With Windows, nobody really knows how to help you; with Linux, nobody wants to bother helping such a lowly, illiterate n00b as you.

As for Wine, if Microsoft did not financially benefit from it, they would have taken down the project before it ever took off. My suspicion is that once Wine reaches a stable state, Microsoft will acquire (or fork) the project and use it as a platform for legacy applications, after they have eradicated Win32 from their new Windows.

All in all, Windows has served me very well for the past years, but I have grown out of it. All the while, I wish to stay away from the holy wars fought daily in the open-source world, most especially the war between GPL and BSD/MIT, although they do seem to be getting along these days. The problems arise when MIT code is about to get linked with GPL code, and that’s when developers have to say “all right, I can relicense for you,” or, “absolutely not, read the GPL and do not use my software if you do not agree with it.”

 

The “libre” paradox

There is a great deal of discordance in the community at large regarding what kinds of software should be made free, open-source, or commercial. Even I, not a developer of any prominent software, have had to tackle this question myself, especially after the Aseprite fiasco over its conversion from commercial GPLv2 to commercial closed-source.

My empirical finding about software production models is that while commercial software can achieve results quickly and efficiently, open-source software runs on ideas and thus tends to achieve results of greater quality. Developers might be hired to write a specific program in six months, yet a developer has all the time in the world to think about the design of a personal project before even putting down a line of code. Moreover, academics (assuming, of course, that academics are the ones who work on FOSS projects, since they are too busy for a full-time job, but are keen to write code for societal good) have an affinity for peer review, encouraging only the best development and security practices, at the risk of scrutiny otherwise.

It is no surprise, then, that companies tend to cherry-pick code and design from FOSS projects to fuel something slightly better.

When a new idea is introduced for the first time, it is competition and money that drive results. Bell Labs et al. dominating computing research for decades, and the threat of the Soviet Union pushing the United States government to fund NASA, are prime examples of these driving factors for research and innovation.

But neither Bell Labs nor NASA ever sold products to consumers. Instead, other companies were founded to fill this gap – not to create something radically new (when this happens, they often either fail miserably or succeed dramatically), but simply to take the next step. The research is already complete – just put it in a box, along with an instruction manual, and sell it to consumers. There’s nothing like it on the market, so it’s a perfect choice for consumers. Rake in the cash. Corner the market. And soon, a new company will form to take yet another baby step in innovation, and that one will be fruitful too.

When the innovation has become so clear and obvious to the public that it can be learned by undergraduates or any interested student, it is then time to charitably introduce the innovation to others. The modern computer has existed for a long time, yet Eben Upton and the Raspberry Pi Foundation took the “small” step of putting a SoC on a small board and selling it for $35. At the time, I don’t think it would have been easy to find a technologically up-to-date, general-purpose computing device at that price point and form factor. But because the Raspberry Pi Foundation did it, now many businesses exist for the sole purpose of manufacturing and selling low-cost single-board computers. As a result of this work of charity, computers are now easily accessible to all. What’s more, students can and must take courses covering the architecture of the modern computer, and some students are even tasked with constructing one from scratch.

Likewise, once an open-source project is done on a particular topic, that particular topic is essentially “done”. There are not many businesses out there that sell consumer operating systems anymore; if people seek a free operating system, there’s GNU. It’s done; why look further? Any improvements needed are a code contribution away, solving the problem for thousands of others as well. Why should companies strive to produce new modeling software if they must compete with programs like Blender and existing commercial software such as Maya?

My observation is that open-source software is the endgame. Commercial software cannot compete with an open-source program that has the same features; the open-source program will win consistently. Conversely, commercial software stems from open-source algorithms waiting to be applied, be it TensorFlow or Opus.

Basically, it makes sense to start a company to churn out commercial software if one is willing to apply existing research to consumer applications (take small steps); join a larger company to rapidly develop and deploy something innovative; or join academia to write about theory in its most idealistic form.

Under these observations, startup businesses fail because they attempt to innovate too much too quickly. The job is not to innovate immensely all at once – the job is to found a business on a basic yet promising idea (the seed), produce results, and then continue taking small, gradual steps toward innovation. The rate of innovation will be unquestionable to investors – if you survive for two years, putting out new features and products at a healthy pace, then people will naturally predict the same rate for the future and be more willing to invest.

Yet you would never find enough resources to make a successful startup for, say, building giant mechs or launching payloads into space. There’s just too much research to be done, and the many people capable of (and in demand for) performing this research need coin to sustain themselves. In contrast, the military can pour any amount of money they wish into a particular project, and they could have a big walking mech that looks like the one from Avatar in less than 36 months. (I’d wager the military has already been working on this as a top-secret project.)

But do you see how much we have departed from the idea of “libre?” My conclusion is this: businesses do things quickly, while charitable people do things right. Once the research has been completed and the applications have been pitched and sold, it is then time to transition and spread the innovation to the general public. That is the cycle of innovation.

Personal protection

You may know my blog well for my rants, but if you have been or are planning to look into my personal life, you should know that I have hidden these posts. They have provided great insight into myself, but being public on the open Internet, they can also be used against me in unpredictable ways.

They explain in great detail, for instance, why I seem to lack the motivation to work on my projects, what effect this has on me, and what grim outlooks I have had on life in the past two years – but I do believe there are some people out there who are willing to argue nonetheless about my personal life, arguments which take mental energy and time to address.

I may open these posts in the future, but for now, a little bit of privacy might be appreciated.

Soundscapes

I feel like publishing what songs followed me around in my head while I was in Japan, so I’ll list them here:

Kyoto and rural areas: Xyce – A summer afternoon
Crossing over Rainbow Bridge: Mirror’s Edge menu theme
Tokyo: BĂ´a – Duvet
Plane taking off back to Japan: SAVESTATES – When They Find You, Don’t Tell Them You’re Dead
After returning to Japan: Zabutom – My alien shoes

I think they are fairly dumb song choices, but I really could not get them out of my head, so if you want to add to the atmosphere while reading my account of the Japan trip, you can play the corresponding song.

Not sure why anyone wants to know this, though.

The problem of image formats

In the making of Animated Chatroom, I’ve been encountering a major snag: none of the popular image formats seem to fit my needs. I need an alpha channel (that isn’t 1-bit!), animation support, and good compression. Here are the candidates:

  • GIF – used since the 90s. Good compression, excellent animation support, but palettized and with only 1-bit transparency. I can’t use it for the complex 3D sprites, though. Dithering hacks are still used to this day to try to mask the limitations of GIF.
  • APNG – Meant for transparent animations, but poorly supported by most libraries. Not even standardized; some browsers may be looking to remove it (already?). Many encoders implement it poorly, storing each frame as a full PNG next to the previous one instead of reusing the blocks shared by both frames, leading to an inflated file size (often larger than GIF).
  • WebM – Alpha support was thoroughly devised in VP8 via the YUVA420P pixel format, yet left as an afterthought in the conception of VP9. Nevertheless, VP8 has excellent compression, but again, support for YUVA420P is cast aside in many FFmpeg-based decoders, leading to the alpha layer being silently flattened to a black or white matte.
  • PNG image sequence – Brute force solution. No inter-frame compression, leading to intolerable sizes.
  • MNG – Are there even up-to-date implementations of MNG? Does anyone even use MNG in 2018? I thought so.
  • WebP – Seems decent, but inferior compression and lossy by default.
  • FLIF – Are we really ready to enter into “the future”? While FLIF may fit the bill for literally all of my needs, there is no stable support to be found anywhere, except in the form of a native library. I need support for Python if I am to get anywhere.
  • My own format – Why in the world would I want to do this? I would rather put LZ4 on APNG than reinvent the wheel.

For now, I don’t have much of a choice for animated image support except GIF, until certain bugs are fixed in pyglet that prevent alpha support when decoding via FFmpeg.
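
For reference, the GIF path is at least painless in pyglet today – a minimal sketch using its built-in animation loader (the file name is made up); the palette and 1-bit transparency limits still apply:

    # Minimal sketch: load and display an animated GIF with pyglet.
    import pyglet

    animation = pyglet.image.load_animation("emote_thinking.gif")  # hypothetical file
    sprite = pyglet.sprite.Sprite(animation)
    window = pyglet.window.Window()

    @window.event
    def on_draw():
        window.clear()
        sprite.draw()

    pyglet.app.run()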

Gearing up

It’s time to start work on Animated Chatroom. It is a monumental project; the largest I have ever desired to undertake.

My resources are somewhat scarce, but it could be worse. The two resources I am most short on are developers (human resources) and energy (something that tends to be inversely proportional to time). The developers I seek are either not competent enough to produce modular code, or they live in a time zone so different that any coordination is complicated. My energy is drained by playing with my brother or by real-life tasks I have been postponing for too long, such as cleaning some things up.

There is another question that compounds my desire to do anything other than work on Animated Chatroom: where do I even start?

Well, let’s see what Torvalds has to say about his success:

Nobody should start to undertake a large project. You start with a small trivial project, and you should never expect it to get large. If you do, you’ll just overdesign and generally think it is more important than it likely is at that stage. Or worse, you might be scared away by the sheer size of the work you envision. So start small, and think about the details. Don’t think about some big picture and fancy design. If it doesn’t solve some fairly immediate need, it’s almost certainly over-designed. And don’t expect people to jump in and help you. That’s not how these things work. You need to get something half-way useful first, and then others will say “hey, that almost works for me”, and they’ll get involved in the project.

Okay, Benevolent Dictator Linus…

You start with a small trivial project, and you should never expect it to get large. If you do, you’ll just overdesign and generally think it is more important than it likely is at that stage.

All right, so we started with a small trivial project. It was called Attorney Online 2. It was good. And then it tanked because of poor design. I want Animated Chatroom to not go through that pain again.

Or worse, you might be scared away by the sheer size of the work you envision.

Which I am. All right, so what features do we not need? Let’s cut nodes until we get something less overwhelming.

Better. That’s almost the bare minimum that I need.

In list format:

  1. Core animation engine.
  2. Asset loader.
  3. Basic network.
  4. Basic UI.

That’s all; I guess I don’t care about anything else right now. So let’s cut it down even further.

Okay. So version 0.1 will barely have a UI. It’s just figuring out how stuff should work.
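
As a rough sketch only – not real project code, and every class name below is a hypothetical placeholder – this is roughly how I imagine those four pieces wiring together for a 0.1 that barely has a UI:

    # Rough sketch of a 0.1 skeleton; all names are hypothetical placeholders.

    class AssetLoader:
        """Asset loader: read straight from disk; SQLite cache and downloads come later."""
        def load(self, name):
            with open("assets/" + name, "rb") as f:
                return f.read()

    class AnimationEngine:
        """Core animation engine: for 0.1 it only proves the data flow works."""
        def __init__(self, loader):
            self.loader = loader

        def play(self, emote):
            data = self.loader.load(emote)
            print("would play", emote, "-", len(data), "bytes")

    class BasicNetwork:
        """Basic network stand-in: feeds canned events instead of real packets."""
        def poll(self):
            return [("emote", "thinking.gif")]

    def main():
        engine = AnimationEngine(AssetLoader())
        for kind, payload in BasicNetwork().poll():
            if kind == "emote":
                engine.play(payload)

    if __name__ == "__main__":
        main()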

It’s clear that VNVM is at the center of this entire project. If I can design VNVM correctly, then this project has a chance; otherwise, a poor execution will lead to a shaky foundation.

The Visual Novel Virtual Machine

What is the purpose of the Visual Novel Virtual Machine project? The purpose is to bring animation and dialogue sequences, the bread and butter of visual novels, to a portable environment. From reverse engineering performed by others, it turns out that major visual novels also use a bytecode to control dialogue and game events. In the VNVM world, this bytecode is called VNASM (Visual Novel Assembly).

Within characters, emotes are simply small bits of VNASM, which are then called by the parent game (which also runs in the VNVM). Recording a game is just a matter of storing what code was emitted and when. The point is that essentially all VNASM is compiled or emitted by a front-end, rendering it unnecessary to understand VNASM to write a character. (But, it would kinda be nice to be able to inline it, wouldn’t it?)

This makes VNVM suitable for both scripted and network environments. In a network situation, where execution is left open-ended, a simple wait loop awaits the next instruction from the network buffer. New clients simply retrieve the full execution state of the VNVM to get going. The server controls what kinds of commands can be sent to it; most likely, an in-character chat request to the server will look something like this:

{ emote: "thinking", message: "Hmm... \p{2} I wonder if I can get this to work right..." }

The \p{2} marker denotes a pause of 2 seconds, which the server parses out and emits as a delay (clamping the number to a reasonable amount, of course). The server then pushes a reference to the character who wants to talk, as well as the message to be said, and calls char_0b7aa8::thinking, where 0b7aa8 is the character’s ID. This means that the subroutine named thinking is located in a segment of VNVM code named char_0b7aa8.
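
As a rough sketch of that flow – the instruction names PUSH_CHAR, PUSH_TEXT, DELAY, and CALL are hypothetical placeholders, since the real VNASM opcodes are not pinned down yet – the server-side parsing might look something like this:

    # Rough sketch: parse an in-character chat request into VNVM-style
    # instructions. Opcode names are placeholders, not real VNASM.
    import re

    MAX_PAUSE = 10  # clamp \p{n} to a reasonable amount

    def emit_chat(char_id, emote, message):
        instructions = [("PUSH_CHAR", char_id)]
        # Split on \p{n} markers; odd-indexed pieces are the pause lengths.
        parts = re.split(r"\\p\{(\d+)\}", message)
        for i, part in enumerate(parts):
            if i % 2 == 0:
                if part:
                    instructions.append(("PUSH_TEXT", part))
            else:
                instructions.append(("DELAY", min(int(part), MAX_PAUSE)))
        # Jump into the emote subroutine in the character's code segment.
        instructions.append(("CALL", "char_%s::%s" % (char_id, emote)))
        return instructions

    print(emit_chat("0b7aa8", "thinking",
                    "Hmm... \\p{2} I wonder if I can get this to work right..."))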

More to follow later.