Author Archives: oldbyte10

Does pyqtdeploy even work?

I know nobody is going to read this terrible blog to find this, but still, I’m moderately frustrated trying to find a decent workflow to deploy a small, single-executable, Python-based Qt application.

Even on Windows using C++, it was not so easy to build statically until I found the Qt static libraries on the MinGW/MSYS2 repository – then building statically became a magical experience.

So far, the only deployment tools that promise to deploy a Python Qt program as a single executable are PyInstaller and pyqtdeploy.

PyInstaller works by freezing everything, creating an archive inside the executable with the minimum number of modules necessary to run, invoking UPX on these modules, and then when the program is run, it extracts everything to a temporary folder and runs the actual program from there. As such, startup times seem to be around 3-5 seconds, and the size of the executable is about 30 MB.
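
The extract-and-run idea can be sketched in a few lines of Python. This is a toy illustration of the onefile concept, not PyInstaller’s actual bootloader (which is written in C); the archive name and scratch-directory prefix are made up here:

```python
import os
import runpy
import tempfile
import zipfile

# Build a toy "payload" archive standing in for the one PyInstaller
# appends to its bootloader executable.
payload = os.path.join(tempfile.gettempdir(), "payload.zip")
with zipfile.ZipFile(payload, "w") as zf:
    zf.writestr("app.py", "RESULT = 6 * 7\nprint('app says', RESULT)\n")

# At startup, a onefile bootloader extracts the archive into a scratch
# directory and runs the real entry point from there.
workdir = tempfile.mkdtemp(prefix="_MEI")
with zipfile.ZipFile(payload) as zf:
    zf.extractall(workdir)

# Execute the extracted script; run_path returns its module globals.
globals_dict = runpy.run_path(os.path.join(workdir, "app.py"))
print(globals_dict["RESULT"])
```

The extraction step is exactly where the 3-5-second startup cost comes from: every launch pays for decompressing the whole archive before the first line of application code runs.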

pyqtdeploy works by freezing your code, turning it into a Qt project with some pyqtdeploy-specific code, and then compiling that project as if it were a C++ application, so that it can be linked against a static build of Qt.

But in order to use pyqtdeploy, you need to have the libraries at hand for linking:

LIBS += -lQtCore
LIBS += -lQtGui
LIBS += -lpython36
LIBS += -lsip

There’s no way around it – you must build Python and the other dependencies from scratch, and this could take a long time.

I have also encountered strange errors such as SOLE_AUTHENTICATION_SERVICE being undefined in the Windows API headers.

I mean, I suppose pyqtdeploy works, but is this even a trajectory worth pursuing? What would be the pre-UPX size of such an executable – 25 MB, perhaps? That would put it on par with the AO executable.

I might as well write the launcher in C++, or switch to Tkinter.

A humanitarian mission for mesh networking

After Hurricane Maria, I was invited to a Slack group in Puerto Rico to offer my programming expertise to anyone who needed it. After beginning to comprehend the magnitude of the communications problem, I scoured for ways to set up long-distance mesh networking. Not mobile apps like FireChat, which rely on short-distance Wi-Fi or Bluetooth to establish limited local communications – rather, ways to post and find information across the entire island, with relays that could connect through the limited submarine cables to the outside Internet as a gateway for government agencies and worried relatives.

During the three weeks I spent interested in this project (though powerless to act on it, as I was taking classes), I investigated existing technologies (such as 802.11s), the capabilities of router firmware, the theoretical ranges of high-gain antennas, and other existing projects.

I saw Project Loon, but never expected much of it. The project must have taken a great deal of effort to take off, but unfortunately, it seemed to have a high cost with little return. Essentially, balloons were launched from some point on Earth and then carried by high-altitude winds across Puerto Rico for a few hours, eventually landing somewhere in the United States. Despite this effort, I found very few reports of actual reception from a Project Loon balloon.

Meanwhile, someone in the mesh networking Slack channel informed me that they were working with a professor at A&M to implement a mesh network from a paper that was already written. While I ultimately never saw the implementation of this mesh network, I felt put down by my naivete; accepting that my plans were undeveloped and unexecutable, I moved on with the semester. Surely, mobile carriers must have had all hands on deck to reestablish cell phone coverage as quickly as possible, which is certainly the best long-term solution to the issue.

However, many places other than Puerto Rico remain in dire need of communications infrastructure – towns and villages that for-profit carriers have no interest in covering. Moreover, there are islands at risk of becoming incommunicable in the event of a hurricane.

I am looking to start a humanitarian mission to set up a mesh network. I find that there are three major characteristics to a theoretical successful mesh network: resilience, reach, and time to deploy.

A mesh network that is not resilient is flimsy: one failed node – perhaps from bad weather or even vandalism – should not render all of the other nodes useless. Rather, the network should continue operating internally until connection with the other nodes can be reestablished; better yet, the situation can be avoided entirely by providing redundant connections to other nodes, or even by wormholing across the mesh network via cellular data.

A mesh network that does not reach has no users to carry load for, and thus becomes a functionally useless work of modern art. No, your users will not install an app from the app store – besides, with what Internet? – or buy a $50 pen-sized repeater from you. They want to get close to a hotspot – perhaps a few blocks away in Ponce – and let relatives all the way in Rio Piedras know that they are safe. And to maximize reach, of course, you need high-gain antennas to make 10-to-15-mile hops between backbone nodes that carry most of the traffic, which then distribute the traffic to subsidiary nodes down near town centers using omnidirectional antennas.
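
As a rough sanity check on those hop lengths, free-space path loss can be estimated with the standard formula FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.44. The sketch below ignores terrain and Fresnel-zone clearance, which matter enormously in practice, and the transmit power, antenna gains, and sensitivity floor are hypothetical round numbers:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for a line-of-sight link."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# A 15-mile (~24 km) backbone hop at 5.8 GHz:
loss = fspl_db(24.0, 5800.0)

# Hypothetical link budget: 23 dBm TX power plus a 30 dBi dish on each
# end must overcome the path loss and still clear the receiver's
# sensitivity floor (say, -90 dBm for a low MCS rate).
rx_power = 23 + 30 + 30 - loss
print(f"path loss: {loss:.1f} dB, received: {rx_power:.1f} dBm")
```

The margin in free space looks comfortable, which is why such hops are routinely demonstrated with consumer dishes; it is obstructions and alignment, not raw distance, that kill real deployments.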

A mesh network that takes too long to deploy will not find much use in times of disaster. Cellular companies work quickly to restore coverage – a mesh network simply cannot beat cell coverage once it has been reestablished. First responders will bring satellite phones, and chances of switching to an entirely new communication system will dwindle as the weeks pass as volunteer workflows solidify.

How do I wish to achieve these mesh networking goals?

  • Resilience – use Elixir and Erlang/OTP to build fault-tolerant systems and web servers that can shape traffic to accommodate both real-time and non-real-time demands. For instance, there could be both voice and text coming through a narrow link, which could be as low as 20 Mbps. There may also be an indirect route to the Internet, but there may not be enough bandwidth to allow all users to be routed to the Internet. Moreover, decentralized data structures exist that can be split and merged, in case new nodes are added or nodes become split in an emergency, with possible delayed communication between nodes due to an unreliable data link.
  • Reach – allow users to reach the access point via conventional Wi-Fi or cellular radio, and connect via web browser. Nodes use omnidirectional antennas for distribution and high-gain antennas to form a backbone that can span dozens of miles.
  • Time to deploy – use off-the-shelf consumer hardware and allow flexibility in choice of hardware. Make the specs open for anyone to build a node if desired. Pipeline the production of such nodes with a price tag of less than $400 per node.
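
The split-and-merge idea above maps onto state-based CRDTs. The real system would presumably be built in Elixir, but a minimal last-writer-wins map sketched in Python (with invented key names) shows the merge semantics:

```python
from dataclasses import dataclass, field

@dataclass
class LWWMap:
    """Last-writer-wins map: each key carries the timestamp of its
    latest write, so two replicas can merge after a partition."""
    data: dict = field(default_factory=dict)  # key -> (timestamp, value)

    def put(self, key, value, ts):
        current = self.data.get(key)
        if current is None or ts > current[0]:
            self.data[key] = (ts, value)

    def merge(self, other: "LWWMap"):
        """Fold another replica's state in; commutative and idempotent."""
        for key, (ts, value) in other.data.items():
            self.put(key, value, ts)

    def get(self, key):
        entry = self.data.get(key)
        return entry[1] if entry else None

# Two nodes diverge during a network split, then reconcile later over
# an unreliable, delayed link.
a, b = LWWMap(), LWWMap()
a.put("status:maria", "safe", ts=100)
b.put("status:maria", "needs water", ts=250)  # later report wins
a.merge(b)
print(a.get("status:maria"))
```

Because merging is commutative and idempotent, nodes can exchange state in any order and any number of times, which is exactly the property needed when links come and go.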

I imagine that the mesh network will predominantly serve a disaster-oriented social network with various key features:

  • Safety check – when and where did this person report that they were okay or needed assistance?
  • Posts – both public and private
  • Maps – locations that are open for business, distress calls, closed roads, etc.
  • Real-time chat (text and voice)
  • Full interaction with the outside world via Internet relays
  • Limited routing to specific websites on the open Internet, if available (e.g. Wikipedia)

One issue with this idea, I suppose, is the prerequisite of having a fully decentralized social network, which has yet to be developed. But we cannot wait until the next big disaster to begin creating resilient mesh networks. We must begin experimenting very soon.

Threading in AC

The last time I read about threading, I saw that “even experts have issues with threading.” Either that’s not very encouraging, or I’m an expert for even trying.

There are a bunch of threads and event loops in AC, and the problem of how to deal with them is inevitable. Here is an executive summary of the primary threads:

  • UI thread (managed by Qt)
    • Uses asyncio event loop, but some documentation encourages me to wrap it with QEventLoop for some unspecified reason. So far, it’s working well without using QEventLoop.
    • Core runs on the same thread using a QPygletWidget, which I assume separates resources from the main UI thread since it is OpenGL.
      • Uses QTimer for calling draw and update timers
      • Uses Pyglet’s own event loop for coordinating events within the core
  • Network thread (QThread)
    • Uses asyncio event loop, but it uses asyncio futures and ad-hoc Qt signals to communicate with the UI thread.
    • Main client handler is written using asyncio.Protocol with an async/await reactor pattern, but I want to see if I can import a Node-style event emitter library, since I was going that route anyway with the code I have written.
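
The UI-thread/network-thread handoff above can be prototyped without Qt at all: run an asyncio loop in a dedicated thread and submit coroutines to it with asyncio.run_coroutine_threadsafe, which returns a thread-safe future. In the real app, a Qt signal would replace the blocking future.result() call; the fetch coroutine below is a stand-in for real network I/O:

```python
import asyncio
import threading

# The "network thread": owns its own asyncio event loop, like the
# QThread described above.
loop = asyncio.new_event_loop()
thread = threading.Thread(target=loop.run_forever, daemon=True)
thread.start()

async def fetch(name: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for real network I/O
    return f"hello, {name}"

# The "UI thread": submit work to the network loop and collect the
# result through a concurrent future.
future = asyncio.run_coroutine_threadsafe(fetch("AC"), loop)
result = future.result(timeout=5)
print(result)

# Shut the loop down cleanly from outside its thread.
loop.call_soon_threadsafe(loop.stop)
thread.join()
```

The nice property of this pattern is that only two primitives cross the thread boundary – the coroutine submission and the future – so the number of loops can grow without the communication scheme changing.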

My fear is that the network threads will all get splintered into one thread per character session, and that Pyglet instances on the UI thread will clash, resulting in me splintering all of the Pyglet instances into their own threads. If left unchecked, I could end up with a dozen threads and a dozen event loops.

Then, we have the possibility of asset worker threads for downloading. The issue with this is possible clashing when updating the SQLite local asset repository.
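
One conventional way to avoid that clashing is to funnel every write through a single writer thread behind a queue, so download workers never touch the database connection directly. A sketch, with an invented table schema:

```python
import queue
import sqlite3
import threading

write_queue: "queue.Queue" = queue.Queue()
results: list = []

def writer():
    # The only thread allowed to touch the database connection.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE assets (name TEXT, size INTEGER)")
    while True:
        item = write_queue.get()
        if item is None:  # sentinel: shut down the writer
            break
        db.execute("INSERT INTO assets VALUES (?, ?)", item)
        db.commit()
    results.append(db.execute("SELECT COUNT(*) FROM assets").fetchone()[0])

t = threading.Thread(target=writer)
t.start()

# Several "download workers" would enqueue writes concurrently;
# simulated here from the main thread.
for i in range(10):
    write_queue.put((f"asset{i}", i * 1024))
write_queue.put(None)
t.join()
print(results[0])
```

This sidesteps sqlite3’s same-thread restriction entirely, since the connection is created and used in one thread, and the queue serializes the writes for free.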

The only way to properly manage all of these threads is to take my time writing clean code. I cannot rush to write code that “works,” because of the risk of dozens of race conditions bubbling up, not to mention the technical debt I would incur. Still, if I design this correctly – passing messages through queues and signals rather than sharing mutable state – I should not need a single explicit lock; the GIL only makes individual operations atomic, so it cannot be relied upon for compound updates anyway.

One year after Japan

One year since my arrival from Japan, I have learned an academic year’s worth of knowledge and grown a year more mature.

I spent vivid days enjoying lunch with others and lonely nights sulking in my dorm. I spent boring Sundays eating lunch at Kinsolving and busy days going to club meetings with people I never saw again.

As the sun changed inclination, so did my mind, it seems. Perspectives have changed. My mind melds and rearranges itself, disconnecting itself forever from the old memories of the physics lab and the traumatizingly strenuous AP exams.

As the semesters progress, people come and go. I am pulled out of one world and thrust into another, yet Japan still feels like it happened last week. While I cannot recall all memories, the key ones still feel relatively vivid. I still feel the cotton of the yukata on my body; the refreshing chill of the small shower beside the onsen; the onsen’s disappointingly intolerable warmth; the calm, collected smile of the cashiers and service workers; the bittersweetness of having only been able to visit Akihabara once; the American pride of my Japanese teacher.

It is not certain what I will be doing on June 28, 2019, but it is certain that I will be saving money to return to Japan in 2020 for a study-abroad program.

When I noted in November that the experience will never happen again, I was correct – but this is merely to make way for even greater experiences in the unknown future.

My friend wishes to go to Japan for another week, but after looking at airline price ranges and possible dates, I politely observed that one week was simply not enough time – the insatiable longing to return to Japan would simply repeat itself. No: I need an entire semester to evaluate the culture of Japan, its people, and what it holds in store for enjoyment. I do not wish merely to cherry-pick what I want to know, but rather to immerse myself completely in the language and culture. That should be enough to satisfy any individual.

However, I recognize that after this point, reminiscing about specific details of the trip is an obsession. I must strive to look forward and continue my studies of Japan from a holistic perspective.

The S9

I got an S9 from my father as part of a deal. I did not want the phone, but he got it anyway. This is a flagship device costing almost $1,000; not exactly a small step-up from the S4.

I have been trying not to get the phone dirty with my sweaty hands, but too late for that. It appears to be a well-built and well-designed phone, although it looks prone to damage without adequate casing.

I am not particularly fond of two things: materialism, and giving away random information to any app that wants it.

I mention materialism because nothing lasts forever – the S4, at its time, was the pinnacle of technology, but we have somehow advanced even further in five years. It is difficult to imagine what a phone will look like in five more years. One must also remember that the smartphone is an instrument designed to get things done – an integrated PDA and cell phone – although these days it serves more as a game console.

There are also immense privacy risks one takes simply by using this phone. Android has grown to such tremendous complexity that even I, a programmer, cannot fully comprehend the design of the Android system. Many more apps now grab your location, since optimizations have reduced the battery cost of obtaining a fine location. And the system has become so tightly integrated that practically anything can access anything (if you allow it to).

The strongest aspect of this phone is its speed – whereas Google Maps takes 6 seconds to cold-start on my S4, it loads in about 1 to 1.5 seconds on the S9; essentially instantly.

Finally, this phone allows me to place “HD Voice,” “VoLTE,” “Wi-Fi,” and “HD Video” calls. All of these things seem to be exclusive to AT&T users, with a supported SIM card, with a supported phone (i.e. not an iPhone), in a supported location, on both sides. In essence, the feature is useless for 90% of calls[citation needed]. How much longer will it take to develop and adopt a high-quality communications infrastructure that is standard across all devices and all carriers, including iPhones? Whatever happened to SIP – why didn’t Cingular give everyone a SIP address back in the day? Why do I have to use a cell phone to place a call using my number? Why do we still use numbers – when will we be able to switch to an alphanumeric format like e-mail addresses?

Yes, I understand that we have to maintain compatibility with older phones and landlines via the PSTN – whatever that is these days – and we also have to maintain the reliability of 911 calls.

The walled-garden stubbornness of Apple does not help much, either. Apple simply stands back and laughs at the rest of the handset manufacturers and carriers, who are struggling to agree on common communication interfaces and protocols. Will Apple help? Nope. Their business thrives on discordance and failure among the other cell phone manufacturers to develop open standards. And when they finally agree on an open standard ten years later – yoink! – Apple adopts it instantly in response to the competition.

As for other features, I found the S9’s Smart Switch feature to work perfectly: it was able to migrate everything on my S4, even the things on my SD card (I recommend removing the SD card from the original phone before initiating a transfer). It did not ask me about ADB authorization or anything like that, so I wonder how it was able to accomplish a connection to the phone simply by unlocking it.

When Android will finally have a comprehensive backup and restore feature, however, remains beyond my knowledge. This is Android’s Achilles heel by far.

Oh, and I forgot one last thing about the S9: it has a headphone jack 🙂

On Let’s Encrypt

Let’s Encrypt has been operational for about two years now, although the project originally began in 2015. Let’s Encrypt is the saving grace of HTTPS, but precisely because it is the saving grace of HTTPS, I dislike its endorsement.

Suppose that tomorrow, a security researcher discovers a critical flaw in Certbot or some other part of the Let’s Encrypt certificate issuance system, and in a week, almost every Let’s Encrypt cert is going to get tossed into the CRL, with no ability to create new certs.

They couldn’t do it. They couldn’t possibly toss 100 million certificates into the fire, because LE has already reached a point where it is too big to fail. You can’t tell your users, who expect their website encryption to come for free, “Hey, your CA got compromised, so you’re going to have to pay $20 or more for a cert from Verisign, GeoTrust, or Comodo, because there are no other free, secure CAs available. Sorry.”

And if it comes to that, two things happen:

  1. Verisign et al. gouge prices and have the biggest cert bonanza ever, because website owners have no other choices.
  2. An HTTPS blackout happens, and half of all HTTPS-enabled websites have no choice but to fall back to regular HTTP. And if this happened with a version of Chrome where insecure browsing is banned, then you can just forget about that website unless you are a website owner and choose (1).

You have to remember the situation before Let’s Encrypt: Browser vendors, most especially Google and Mozilla, were pushing as hard as they could toward eradicating HTTP and enforcing HTTPS everywhere, in light of the Edward Snowden and NSA hysteria-bordering-paranoia. However, SSL/TLS certificate options were limited at the time: existing free certificate services had been founded long before then and were commonly suggested for people who were absolutely desperate for a free certificate, but were nonetheless unpopular among CA maintainers due to rampant abuse. In other words, on the idealistic side, people believed that every site ought to have HTTPS. But on the practical side, they asked whether your site really needed HTTPS if you couldn’t afford a certificate and were just serving static content.

Today, those old free CAs have been abandoned by CA maintainers in favor of the one CA to rule them all: the ISRG/Let’s Encrypt CA. I mean, we’re obviously not putting all our eggs in one basket here – if something goes wrong, we still have hundreds of CAs to go by, and if an owner really needs their HTTPS, they can just shell out $100 for a cert. That’s right, if you’re a website owner who cares more about their website than the average Stack Overflow user, you should really consider shelling out money, even though we’re sponsoring a cert service that is absolutely free! Oh, and if something goes wrong, you get what you paid for, right? My logic is totally sound!

Let me reiterate: in the case of a future catastrophe – assuming we are far enough into the future that browsers have placed so much trust in the HTTPS infrastructure that they now prevent casual connections to insecure HTTP websites – there are two answers, based on how much money you have:

  1. You’re f**ed, along with millions of website owners. More news at 11. Maybe the folks at Ars Technica can tell you what to do. Except they’re also too busy panicking about their personal websites.
  2. Buy a cert before they raise their pri– oh, too late, they’re $50 a pop now.

So, I think the problem at hand here is the philosophy behind trust. Trust is such a complicated mechanic in human nature that it cannot be easily automated by a computer. When we make a deal on Craigslist, how do we know we’re not going to end up getting kidnapped by the guy we’re supposed to be meeting with? Is the only reason a bureaucracy trusts me as an individual because I can give them an identification card provided by the government? But how can I, as an individual, trust the bureaucracy or the government? Only because other people trust them, or people trust them with their money?

How does this tie into the Internet? How can I trust PKI, the trust system itself? What happens if I tie a transactional system – specifically the likes of Ethereum – into a web-of-trust system such as PGP? What happens if I tell people, “vote who you trust with your wallets”? What is a trustable identity in a computer network? What remedies does an entity have if their identity is stolen?

On Windows

I have held off on making a post like this for a long time now, but I think it is now the time to do so.

I thought things would improve with Windows, but for the past five years (has time really gone by so quickly?), Microsoft has done nothing for its power users, effectively leaving them in the dark while it “modernizes” its operating system for small devices (netbooks and tablets).

Microsoft knows full well that power users are leaving in droves for Linux, so it developed the Windows Subsystem for Linux – essentially a remake of Interix – to let people “run Ubuntu” on their machines while keeping the familiar taskbar on their desktops and without having to tread through the territory of repartitioning, package management, and drivers. By wielding distros’ terse, hard-to-read documentation as an “advantage” of staying on Windows, Microsoft has kept the uninformed lured into Windows 10.

Let’s remember what Windows used to be primarily for: office applications. Professionals and businesspeople still use Windows every day to get their work done. They were so invested in the system, in fact, that some of them took to learning keyboard shortcuts and other nooks and crannies of the system to work even faster (or because using a mouse was not comfortable).

Today, Windows is used for three reasons:

  1. Microsoft Office dominates the market for productivity.
  2. Windows comes with almost every personal computer that isn’t a Mac.
  3. After MS-DOS, Windows was the go-to platform for PC gaming, and it still is. As such, gamers are reluctant to move anywhere else, lest their performance decrease.

The weight of Win32’s legacy features is too heavy of a burden to keep Windows moving forward as it is. Windows 10 has a multi-generational UI: modern UI (e.g. PC settings menu) from Windows 8 and 10, Aero UI (e.g. Control Panel) from Windows Vista and 7, Luna icons (e.g. Microsoft IME) from Windows XP, and UI that hasn’t changed since the very beginning (e.g. dial-up, private character editor) from Windows 98 and 2000.

The problem is that many business users still depend on Win32 programs. Microsoft is in an extremely tight spot: they must push for new software, all the while keeping friction as low as possible during the transition process.

But if Microsoft is going to eradicate Win32, why bother developing for UWP? Why not take the time now to develop cross-platform applications? Hence, companies that care – that is, companies that do not sell their 15-year-old software as if it were “new” in 2018 – are targeting either the web or Qt (which is very easy to port). Other programs that require somewhat tighter integration with Windows are very likely to use .NET, which means pulling out C#.

Here are some reasons I still use Windows on my desktop:

  1. I am accustomed to the keyboard shortcuts. (i.e. sunk cost)
  2. Microsoft Office.
  3. I can pull out a VM if I need Linux.

However, these reasons are becoming less relevant: I am unfamiliar with Windows 10 (due to its inconsistent UI), and Windows 7 is losing support soon. Moreover, a reliable method of installing Office through Wine is being developed, and hardware pass-through technologies such as VT-d have brought gaming performance in a VM nearly on par with natively running Windows.

I am also tired of the support offered for Windows: those who actually know what they are talking about are called “MVPs,” and everyone else simply seems to throw canned messages for support requests. For instance, if you look up “restore point long time” on Google, the first result is a Quora question called, “Why does system restore point take so long on Windows 10?” with some nonsensical answers:

  • It’s very fast, but restoring it can take a little while. Maybe you are referring to a system backup. Download this backup software and it should be super fast.
  • Just read the article on How-To Geek and it should cover everything. Two hours is worth it to get your computer working again. And if a restore point doesn’t work, just try another one.
  • Microsoft optimizes their DLLs for speed. Also, restore points are disabled by default.
  • This is a terrible feature.
  • Here is how to create a restore point. Go to the Start menu…
  • The “multiple levels of code” is just so much more advanced in Windows 10.

None of them answer the question: why does creating a system restore point take so long?

You can probably find similar blabber for why Windows Installer takes so long, or some technical feature of Windows.

These days, I don’t think many people really know how Windows works. How in the world am I going to use an operating system that nobody actually understands?

In comparison, any other well-supported Linux distribution has people so tough on support that they will yell at you to get all kinds of logs. With Windows, nobody really knows how to help you; with Linux, nobody wants to bother helping such a lowly, illiterate n00b as you.

As for Wine, if Microsoft did not financially benefit from it, Microsoft would have taken down the project before it ever even took off. My suspicion is that once Wine is at a stable state, Microsoft will acquire (or fork) the project and use it as a platform for legacy applications, once they have eradicated Win32 from their new Windows.

All in all, Windows has served me very well for the past years, but I have grown out of it. All the while, I wish to stay away from the holy wars fought daily in the open-source world, most especially the war between GPL and BSD/MIT, although they do seem to be getting along these days. The problems arise when MIT code is about to get linked with GPL code, and that’s when developers have to say “all right, I can relicense for you,” or, “absolutely not, read the GPL and do not use my software if you do not agree with it.”


The “libre” paradox

There is a great amount of discordance in the worldwide community at large regarding what kinds of software should be made free, open-source, or commercial. Even I, who am not a developer of any prominent software, have had to tackle this question myself, especially after the Aseprite fiasco regarding its conversion from commercial GPLv2 to commercial closed-source.

My empirical finding about software production models is that while commercial software can achieve results quickly and efficiently, open-source software runs on ideas and thus tends to achieve results of greater quality. Developers might be hired to write a specific program in six months, yet a developer has all the time in the world to think about the design of a personal project before even putting down a line of code. Moreover, academics (assuming, of course, that academics are the ones who work on FOSS projects, since they are too busy for a full-time job, but are keen to write code for societal good) have an affinity for peer review, encouraging only the best development and security practices, under risk of scrutiny otherwise.

It is no surprise, then, that companies tend to cherry-pick code and design from FOSS projects to fuel something slightly better.

When a new idea is introduced for the first time, it is competition and money that drive results. Bell Labs et al. dominating computing research for decades, and the threatening Soviet Union pushing the United States government to fund NASA’s research, are prime examples of these driving factors for research and innovation.

But neither Bell Labs nor NASA ever sold products to consumers. Instead, other companies were founded to fill this gap – not to create something radically new (often, when this occurs, they miserably fail or dramatically succeed), but rather to simply take the next step. The research has already been completed – just put it in a box, along with an instruction manual, and sell it to consumers. There’s nothing like it on the market, so it’s a perfect choice for consumers. Rake in the cash. Corner the market. And soon, a new company will form to take yet another baby step in innovation, and that one will be fruitful too.

When the innovation has become so clear and obvious to the public that it can be learned by undergraduates or any interested student, it is then time to charitably introduce the innovation to others. The modern computer has existed for a long time, yet Eben Upton and the Raspberry Pi Foundation took the “small” step of putting a SoC on a small board and selling it for $35. At the time, I don’t think it would have been easy to find a technologically up-to-date, general-purpose computing device at that price point and form factor. But because the Raspberry Pi Foundation did it, now many businesses exist for the sole purpose of manufacturing and selling low-cost single-board computers. As a result of this work of charity, computers are now easily accessible to all. What’s more, students can and must take courses covering the architecture of the modern computer, and some students are even tasked with constructing one from scratch.

Likewise, once an open-source project is done on a particular topic, that particular topic is essentially “done.” There are not many businesses out there that sell consumer operating systems anymore; if people seek a free operating system, there’s GNU. It’s done; why look further? Any improvements needed are a code contribution away, solving the problem for thousands of others as well. Why should companies strive to produce new modeling software if they must compete with programs like Blender and existing commercial software such as Maya?

My observation is that open-source software is the endgame. Commercial software cannot compete with an open-source program that has the same features; the open-source program will win consistently. Conversely, commercial software stems from open-source algorithms waiting to be applied, be it TensorFlow or Opus.

Basically, it makes sense to start a company to churn out commercial software if one is willing to apply existing research to consumer applications (take small steps); join a larger company to rapidly develop and deploy something innovative; or join academia to write about theory in its most idealistic form.

Under these observations, startup businesses fail because they attempt to innovate too much too quickly. The job is not to innovate immensely all at once – the job is to found a business on a basic, yet promising idea (the seed), produce results, and then continue taking small, gradual steps toward innovation. The rate of innovation will be unquestionable to investors – if you survive for two years, shipping new features and products at a healthy pace, then people will naturally predict the same rate for the coming future and be more willing to invest.

Yet you would never find enough resources to make a successful startup for, say, building giant mechs or launching payloads into space. There’s just too much research to be done, and the many people who are capable (and in demand) of performing this research need coin to sustain themselves. In contrast, the military can pour any amount of money it wishes into a particular project, and it could have a big walking mech that looks like the one from Avatar in less than 36 months. (I’d wager the military has already been working on this as a top-secret project.)

But do you see how much we have departed from the idea of “libre?” My conclusion is this: businesses do things quickly, while charitable people do things right. Once the research has been completed and the applications have been pitched and sold, it is then time to transition and spread the innovation to the general public. That is the cycle of innovation.

Personal protection

You may know my blog well for my rants, but if you have been or are planning to look into my personal life, you should know that I have hidden these posts. They have provided great insight into myself, but being public on the open Internet, they can also be used against me in unpredictable ways.

They explain in great detail, for instance, why I seem to lack the motivation to work on my projects, what effect this incurs on me, and what grim outlooks I have had on life in the past two years – but I do believe there are some people out there who are willing to argue nonetheless about my personal life, arguments which take mental energy and time to address.

I may open these posts in the future, but for now, a little bit of privacy might be appreciated.


I feel like publishing what songs followed me around in my head while I was in Japan, so I’ll list them here:

Kyoto and rural areas: Xyce – A summer afternoon
Crossing over Rainbow Bridge: Mirror’s Edge menu theme
Tokyo: BĂ´a – Duvet
Plane taking off back to Japan: SAVESTATES – When They Find You, Don’t Tell Them You’re Dead
After returning to Japan: Zabutom – My alien shoes

I think they are fairly dumb song choices, but I really could not get them out of my head, so if you want to add to the atmosphere while reading the trip account of Japan, you can play the corresponding song.

Not sure why anyone wants to know this, though.