
Film piracy

Film is a captivating, immersive medium that tells a story through synchronized picture and sound. Like television, it can be consumed passively rather than demanding an active effort of imagination, and so movies have reached far more Americans than books. On the back of that success, film has become a lucrative business over the past hundred years.

Today, the majority of Americans know of only one type of movie: the movie approved by the Motion Picture Association of America, an industry group comprising six media conglomerates with near-exclusive access to what is commonly known as the movie theater.

The average American has little access to independent or international film; only streaming has eased these films' entry into the market, and even then their exposure to Americans is often filtered through contracts managed by the Big Six.

The end result is a relatively small catalog of extremely popular mainstream movies whose distribution is tightly controlled from production to presentation. A consumer can watch an American movie through only a handful of means:

  • If still in theaters, an MPAA-approved facility.
  • Blu-ray disc, with an up-to-date Blu-ray player and a high-definition TV with HDCP.
  • DVD, with a player of the correct region, assuming that the movie was distributed on DVD.
  • A constellation of video-on-demand services, assuming the service has an up-to-date contract with the distributor:
    • Netflix
    • Amazon Prime Video
    • iTunes

Ironically, the music industry lost this same battle over a decade ago: purchased music is now distributed without DRM.

The problem is that, with the increasing volume of media and licensors' tightening demands for adequate content protection, the film medium as a whole is becoming increasingly inaccessible. One can no longer go to the video store, find something to one's liking, and pop it into the player: one must sift through a variety of providers, at a variety of price points, to see who actually distributes the movie.

And what happens if the content is not accessible? What if the movie or the TV show is not accessible in one’s country due to contractual limitations? What if technological advances prevent one from enjoying purchased content? What if the content is no longer distributed by anyone?

These are the reasons piracy is rationalized: not because people willfully set out to steal, but because piracy has unfortunately become far more convenient than a legitimate transaction. Content is easily searchable, and the results are displayed in a minimalistic table listing every version of the content ever released. A few clicks later, the content is downloaded as a file to one's hard drive, and the download manager intelligently seeds the file back to its peers in a gesture of cooperation. The content (which, surprisingly to some, conforms to high quality standards) has already been decrypted, stripped of DRM, and is ready to be played back on virtually any device.

There is no question why the MPAA seeks to drill anti-piracy campaigns into the minds of Americans: despite all of its efforts, and despite the illegality and dubious ethics of piracy, its members have been unable to compete with the convenience of peer-to-peer downloading and DRM-free video.

For instance, suppose I wish to watch Cowboy Bebop: The Movie. While its theatrical reception was subpar, it is still original Cowboy Bebop content worth watching, like an extended episode. My ideal solution is to look it up on an ad-free, subscription-based streaming service such as Netflix, hit play, and cast it to any television in my house, or simply watch it on my computer monitor.

However, Cowboy Bebop: The Movie is not available. At all. It exists only as an obscure DVD release sold at wildly varying prices, suggesting that some discs are region-locked to Japan while others can be played in the US.

Ultimately, the easiest way to watch it is to search for it in a torrent database, download it, and then serve the file to the television.

It is a sad truth that although I have always played movies from discs and boxes in my personal possession, purchasing high-quality content and then playing it from servers in my personal possession is frowned upon as piracy. (Ironically, even Apple designed this correctly: movies can be streamed either from other computers that hold the downloaded content or from Apple's own servers.)

Ultimately, piracy is a consequence of attempting to navigate a broken system of film distribution, and those astute enough to recognize this tend to pull away from mainstream media to enjoy more traditional media, such as books, which convey the same messages and experiences in more elegant terms.

A troubled relationship with GitLab

When I discovered GitLab in February, a rebellious passion flared up – a desire to break away from the omnipresent, walled-garden development ecosystem that is GitHub.

After GitHub had supposedly banned one of my developers for Attorney Online, that disdain for GitHub boiled over (although it was only later that I considered that perhaps he had involved himself in something he did not tell me about). I switched to GitLab in two days and was pleased by how fully featured it was: it could show icons for repositories, group repositories together, mirror in both directions, and it even came with a fully featured CI! Satisfied that I could escape the grasp of GitHub, I moved the main Attorney Online repos to GitLab to build a pipeline and allow my banned developer to contribute.

But six months on, the cracks in GitLab were beginning to show: odd bugs, thousands of open issues on the GitLab main repo, slow page loads, and a seemingly endless number of switches and dropdowns on every panel. It was like Jira all over again, and things were not improving.

In those same six months, GitHub was making leaps and bounds to compete with GitLab's bells and whistles. By adding features such as security advisories, issue transfers, jump-to-definition, and sponsorships, GitHub was trying to reel its open-source users back into the platform; it also seemed to cut back on its omnipotent moderation, instead granting repositories the tools to moderate themselves.

The distinction was thus made clear to me: GitLab for enterprise, GitHub for community. Enterprises don't care about simplicity, but hobby developers like me do. Each also added features orthogonal to the other: only GitHub supports jump-to-definition, but only GitLab supports arbitrary mirroring rules.

After reverting my move to GitLab, I saw that GitLab was flexible enough to allow me to reap the benefits of both ecosystems – GitHub for project management, and GitLab for its advanced CI pipeline and artifact hosting.

In the end, it is inevitable that today's developer world spins around an indispensable GitHub: it is the product that tamed a complicated version control system and popularized it through a simple-to-use platform for managing open-source projects.

Using JS for Qt logic

Sorry, but I'm going to have to take sides on this one: Electron sucks. At its core, it turns a program into an embedded web browser with somewhat looser restrictions on interfacing with the host system, solely so that web developers can build for the desktop without learning a new language. The benefits of Electron are heavily biased toward developers rather than end users, who should be the real focus of the entire user experience.

The problem is that HTML5 makes for an abstract machine that is far too abstract: the more layers of abstraction that sit between the software and the hardware, the more overhead there is from translation and just-in-time compilation. The result is a web browser that uses 120 MB of shared memory and 200 MB of resident memory per tab.

For a web browser, though, abstraction is good. It makes development easy without sacrificing user experience for what is essentially a meta-program. But for any other desktop application with a specific, well-defined purpose, this abstraction is excessive.

This is generally why I like Qt: it is one of the only libraries left that continues to support native desktop and embedded application development without sacrificing performance. The one problem, however, is that this performance requires C++, and most people are not comfortable working in C++. Moreover, it means double the work if the program also targets the web.

There is QML, which removes most of the C++ and exposes a nice declarative syntax that combines layout and logic in a single file. However, it has two significant problems: it adds even more cruft to the output program, and custom functionality still requires interfacing with C++ code, which can get a little difficult.

Qt’s main Achilles’ heel for a long time has been targeting the web. There are various experimental solutions available, but none of them are stable or fast enough to do the job yet.

I've been toying with an idea. Qt exposes its V4 JavaScript engine (a JIT-compiling engine) for use in traditional C++ desktop programs through QJSEngine. What I could do is the following:

  • Write JS code that both the browser and desktop clients share in common, and then make calls to some abstract interface.
  • Implement the interface uniquely for each respective platform.

For instance, the wiring for most UI code can be written in C++, which then exposes properties and fires events into JS-land. Heck, Qt's meta-object system already does most of that work for us.
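
To make that concrete, here is a minimal sketch of the C++ side, under a few assumptions: PlatformBridge, appName, and showMessage are hypothetical names, and the script is inlined here rather than loaded from a shared JS bundle. Handing a QObject to QJSEngine is enough for the meta-object system to expose its properties and invokable methods to scripts.

    // Minimal sketch: exposing a hypothetical PlatformBridge QObject to Qt's JS engine.
    // Assumes the Qt Qml module (QT += qml) and that this file is main.cpp so moc picks it up.
    #include <QCoreApplication>
    #include <QDebug>
    #include <QJSEngine>
    #include <QJSValue>

    class PlatformBridge : public QObject
    {
        Q_OBJECT
        Q_PROPERTY(QString appName READ appName CONSTANT)
    public:
        using QObject::QObject;

        QString appName() const { return QStringLiteral("DesktopClient"); }

        // Q_INVOKABLE methods become callable from JS through the meta-object system.
        Q_INVOKABLE void showMessage(const QString &text) { qDebug() << "native:" << text; }
    };

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QJSEngine engine;

        // Parented to the application so ownership clearly stays on the C++ side.
        auto *bridge = new PlatformBridge(&app);

        // The shared JS code only ever talks to the abstract `platform` object;
        // a browser build would supply its own implementation of the same interface.
        engine.globalObject().setProperty("platform", engine.newQObject(bridge));

        QJSValue result = engine.evaluate(
            "platform.showMessage('hello from ' + platform.appName); 42");
        qDebug() << "script returned" << result.toInt();
        return 0;
    }

    #include "main.moc"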

How do I maintain the strong contract of an interface? You need a little strong typing, don’t you? Of course, of course – we can always slap in TypeScript, which, naturally, compiles to standards-compliant JavaScript.

The one problem is supporting promises in the JS code that gets run, which depends mostly on the capabilities of the V4 engine. I think V4 supports promises, but this does not seem well documented. Based on a post about invoking C++ functions asynchronously from JS, I think I need to write callback-based functions on the C++ side and then promisify them where the JS interface meets the C++ side. That shouldn't be too hard.
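
As a rough sketch of that callback-based C++ side (Backend, fetchConfig, and the payload string are made up for illustration), the method accepts JS callbacks and invokes the appropriate one once the work finishes; the shared JS layer would then wrap the call in new Promise((resolve, reject) => backend.fetchConfig(resolve, reject)).

    // Sketch of a callback-style async method exposed to JS (class and method names are illustrative).
    // An object of this class would be handed to the engine the same way as the bridge above.
    #include <QJSValue>
    #include <QJSValueList>
    #include <QObject>
    #include <QTimer>

    class Backend : public QObject
    {
        Q_OBJECT
    public:
        using QObject::QObject;

        // JS passes success and error callbacks; the JS layer promisifies this call.
        Q_INVOKABLE void fetchConfig(QJSValue onSuccess, QJSValue onError)
        {
            // Simulate asynchronous work with a single-shot timer; a real implementation
            // would react to a network reply or a worker thread finishing instead.
            QTimer::singleShot(0, this, [onSuccess, onError]() mutable {
                const bool ok = true; // placeholder result
                if (ok && onSuccess.isCallable())
                    onSuccess.call(QJSValueList{ QJSValue(QStringLiteral("config-data")) });
                else if (onError.isCallable())
                    onError.call(QJSValueList{ QJSValue(QStringLiteral("request failed")) });
            });
        }
    };

The promisified wrapper can then be typed against the shared TypeScript interface, so the browser and desktop implementations stay in lockstep.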

Note that important new features for QJSEngine, such as ES6 support, only arrived in Qt 5.12. This might complicate distribution on Linux (since Qt continues to lag behind in Debian and Ubuntu), but we'll get there when we get there: it is like worrying about tripping on a rock at the summit of a mountain while we are still at home base.

Indexing the past and present

With the shutdown of GeoCities Japan, we are reaching an important point in the history of the Internet: historical information is vanishing, replaced by new information that is hidden away as small snippets inside social media systems.

It is becoming increasingly apparent that a vast trove of information is simply missing from Google Search. Because Google aggressively favors well-ranked sites, user-made sites with obscure but useful information go under-indexed, and their lack of maintenance eventually leads to their loss forever.

For instance, I was only able to find MIDI versions of Pokémon Ruby and Sapphire music on a personal site hosted by Comcast. After Comcast shut down its personal sites, that information vanished from search indexes forever, surviving only in the Internet Archive.

What I propose is indexing and ranking the content of the Internet Archive and social media networks to build a powerful search engine capable of searching past, present, and real-time data.

A long-standing fault of Google Search has been its dumbing down of information as it is aggregated into the Knowledge Graph, which inhibits the usefulness of complex queries. If a query is too complex (i.e., its keywords are too far apart from one another), Google Search will attempt to ignore some keywords to fit the data it has indexed, which is organized around particular categories and keywords. If the full complex query is forced, though, Google Search will be unable to come up with results, not because the information does not exist, but because it does not index or rank webpages in a way that is optimized for complex queries.

The corpus of information is also diversifying: there is more information in e-books, chat logs, and Facebook conversations than can be found simply by crawling the hypertext. But Google's search engine has not matched this diversification, opting instead to develop the Knowledge Graph into a primary and secondary source of information.

I think this would be a great direction for a search engine such as DuckDuckGo to take in order to compete with Google Search on a dimension other than privacy. After all, Google Search is no longer Google's main product.

One year after Japan

One year after my return from Japan, I have learned an academic year's worth of knowledge and grown a year more mature.

I spent vivid days enjoying lunch with others and lonely nights sulking in my dorm. I spent boring Sundays eating lunch at Kinsolving and busy days going to club meetings with people I never saw again.

As the sun changed inclination, so did my mind, it seems. Perspectives have changed. My mind melds and rearranges itself, disconnecting itself forever from the old memories of the physics lab and the traumatizingly strenuous AP exams.

As the semesters progress, people come and go. I am pulled out of one world and thrust into another, yet Japan still feels like it happened last week. While I cannot recall every memory, the key ones still feel relatively vivid. I still feel the cotton of the yukata on my body; the refreshing chill of the small shower beside the onsen; the onsen's disappointingly intolerable warmth; the calm, collected smiles of the cashiers and service workers; the bittersweetness of having been able to visit Akihabara only once; the American pride of my Japanese teacher.

It is not certain what I will be doing on June 28, 2019, but it is certain that I will be saving money to return to Japan in 2020 for a study-abroad program.

When I noted in November that the experience will never happen again, I was correct – but this is merely to make way for even greater experiences in the unknown future.

My friend wishes to go to Japan for another week, but after looking at airline prices and possible dates, I politely observed that one week is simply not enough time: the insatiable longing to return to Japan would simply repeat itself. No: I need an entire semester to take in the culture of Japan, its people, and what it holds in store for enjoyment. I wish not merely to cherry-pick what I want to know, but to immerse myself completely in the language and culture. That should be enough to satisfy any individual.

However, I recognize that beyond this point, reminiscing over specific details of the trip becomes an obsession. I must strive to look forward and continue my studies of Japan from a holistic perspective.

Migration event soon

I tried to connect to my own website on Friday, but the connection kept timing out. My mind raced with all of these awful thoughts: maybe some script kiddie finally breached the PHP process and decided to bring everything down. Or perhaps a disk failed on the old SCSI RAID array, and now the server is just waiting for me to connect a keyboard and press Enter all the way back at home to start the server in degraded mode.

But alas, the reality was none of that. Upon returning home on Saturday, I went up to the attic and found the server powered off, fans idle. I impatiently turned it on, and the machine roared to life once again. I supervised the whole boot: everything good. Maybe there was a power outage?

Yet more wrong guesses. The culprit was my father, who had decided to turn the server off (God knows how; did he really hold the power button until it shut off?) without any notice. Later, he made an offhand remark about having turned the server off, not knowing that I had turned it back on.

I want to migrate the server; in fact, I now need to. It's old, it's heavy, it's loud, and it's expensive to power (it uses about as many kilowatt-hours per month as the pool filter). It's pointless to keep it around, and probably embarrassing to explain why I still use it.

My first option is to throw the site onto my DigitalOcean droplet. I could use a Docker container, but then I would have to learn how to deal with volatility and general maintenance.

There is also the option of converting everything to Jekyll; the main problem is that I am very unfamiliar with Ruby, and I would lose the markup flexibility of raw HTML (at least, that's the impression it gives me). On top of that, I don't know how to transplant my blog template into a Jekyll template (it's not my template!), and I don't want to give in to the overused templates on offer. And then, where would I host the site? GitHub? There's no reason to push my rants to GitHub so the world can see what kinds of "contributions" I make every couple of weeks.

Finally, there is the option of moving to a Raspberry Pi, which would grant me continued access to my home network, zero maintenance costs (my parents pay for power), and minimal changes to the web stack I currently use.

So immediately before leaving for college again, at the cost of probably arriving late, I fumbled around for my Raspberry Pi and connected it to the Ethernet port in my room. I guessed the password a couple of times over SSH and then just decided to pull out a keyboard and break into it locally so that I could remember what the password was. Oh, right, those credentials. I shoved the keyboard and dongle back into my duffel bag, gathered my other things, and finally set out.

Now, it is my responsibility to get the RPi up to speed as the successor to the PowerEdge 2600.

Domain change

After an entirely unexpected drop of the extremely popular homenet.org domain (yes, visitors from Google, “homenet.org is down”!), it became impossible to reach the website via longbyte1.homenet.org due to an unreachable path to FreeDNS. Thus, I decided to just finish moving to n00bworld.com. It took a while to figure out how to get WordPress back up and pointing to n00bworld.com, but I eventually succeeded.

What I do not know, however, is whether I will succeed in finishing the account of my Japan trip. I have been putting that off for far too long now. Ugh.