This serves as another reminder to myself that people from various circles actually read what I write on here.
This morning, I received a “boil water” notice from the university. I immediately searched the news to investigate the exact reason – is the water contaminated, and what is it contaminated with?
However, all I could find were two vague reports from city officials: the treatment plants were overloaded with silt from flooding, and Lake Travis was only four feet from spilling over the dam. Pressed to maintain water pressure adequate for fire hoses to remain usable, the city decided to “reduce” the treatment of the water so that enough water could be supplied, meaning it is no longer at the “high standards” the city sets for potable water.
But water treatment systems are not a black box; they are a multi-stage process! Which stage of treatment was hastened, or are stages being bypassed entirely? Presumably the filtration of particulate matter is being reduced, but the chlorine disinfection stage should still be keeping the water sterile. None of these questions can be answered, however, given the vagueness of the reports.
Affected treatment plants? Undisclosed. Particulate matter and bacteria reports? Nonexistent, assuming the Austin website actually works right now, which it does not.
Here is the main contradiction in their statement:
WHY IS THE BOIL WATER NOTICE IMPORTANT
Inadequately treated water may contain harmful bacteria, viruses, and parasites which can cause symptoms such as diarrhea, cramps, nausea, headaches, or other symptoms.
But earlier in their statement, they stated the following:
It’s important to note that there have been no positive tests for bacterial infiltration of the system at this time.
So what bacteria am I going to kill by boiling water?
All that I can conclude is that the city of Austin is spreading fear, uncertainty, and doubt about the water quality simply to reduce stress on the system, without presenting hard evidence that the water is actually unsafe to drink. Boiling water will not eliminate particulate matter, and per the aforementioned press release, “city officials” (whoever those are) have explicitly stated that no bacterial contamination of the system has been detected, so there are no bacteria to kill by boiling.
One benefit of this warning to treatment plant operators, however, is that they now have free rein over which stages they wish to reduce or bypass, including the disinfection stage. But due to the lack of transparency, there is no information to ascertain which stages are being bypassed; the water could really be of any quality right now, and it could even still be perfectly fine.
My questioning of this warning stems from a fundamental distrust in government decisions and communication to its citizens. People simply echo the same message, without seeming to give it much thought: “Boil water. Boil water. Boil water.” And on the other hand, city officials might state that the treated water is completely safe to drink, despite findings of statistically significant lead concentrations in some schools!
I’ll comply out of an abundance of caution (and because noncompliance has social implications), but mindless compliance and echoing of vague mass messages should not be the goal of the government. Individuals should be able to obtain enough information to make an informed decision and understand the rationale of the government in its own decisions.
It is now the day after the announcement of the restrictions, and the technical details surrounding the problem remain vague. It seems that the restriction has indeed granted treatment plant operators free license to modify treatment controls as they see fit, without necessarily needing to meet the criteria for potable water. Moreover, it appears that the utility has known about this problem for quite some time, and only now has it decided to take drastic action to prevent a water shortage.
I would not trust this water until the utility produces details of the actions being taken in these treatment plants to clean up this mess.
Here I am on my cozy Arch Linux machine, enjoying the good life of customizability and modularity of, well, literally every component of the machine.
I look up the equivalent of DMG on Windows – apparently, DMG files also have built-in code-signing and checksum capabilities. The best part about a DMG file is that it is a multipurpose format: it can be mounted like a drive as a method of isolation, or it can be used to package a full software installation.
In Windows-land, there are only ZIP files, MSI installers, and whatever other breeds of self-extracting archives and installers have been devised over the decades.
At this point, I realize that Windows is fundamentally outdated. Unable to keep up with the breakneck development of Mac OS X/macOS, Microsoft will be hard-pressed to sweep out deprecated APIs one by one.
The success of Windows is attributable to the fact that it has worked on every IBM-compatible PC since the late 1980s and has maintained a stellar record of software compatibility, a coveted characteristic for enterprises looking to minimize software development costs. By comparison, the Macintosh has undergone several architectural leaps, to say nothing of the machine’s high cost.
I think that the market is in need of a well-designed, uncomplicated Linux distribution that is accessible and familiar to consumers, all the while being enticing for OEMs to deploy. Such a distro would not be another Ubuntu – although it could well be Ubuntu, since Canonical has cemented its position in the open-source world. The problem with Ubuntu, however, is that it has a reputation for advice that involves the command line. A distro that is consumer-oriented keeps the intimidating terminal away!
It would fill the niche market that Chrome OS dominated: lightweight, locked-down devices mostly for browsing the Web. The part where Chrome OS failed, however, was when companies wished to port native software that a web browser lacks the performance or capability to drive, such as anything involving hardware peripherals. With a Linux base, hardware interfacing need not be sacrificed.
Would such an operating system run into legal trouble if it came with Wine or an ability to install Wine when the first Windows program is installed? What if it could run Office seamlessly?
What if it began to make some revolutionary design decisions of its own?
Honestly, I don’t know where I’m going with this anymore. Back to work.
I know nobody is going to read this terrible blog to find this, but still, I’m moderately frustrated trying to find a decent workflow to deploy a small, single-executable, Python-based Qt application.
Even on Windows using C++, it was not so easy to build statically until I found the Qt static libraries on the MinGW/MSYS2 repository – then building statically became a magical experience.
So far, the only deployment tools that promise to deploy a Python Qt program as a single executable are PyInstaller and pyqtdeploy.
PyInstaller works by freezing everything, creating an archive inside the executable with the minimum number of modules necessary to run, invoking UPX on these modules, and then when the program is run, it extracts everything to a temporary folder and runs the actual program from there. As such, startup times seem to be around 3-5 seconds, and the size of the executable is about 30 MB.
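For reference, a minimal one-file spec might look like the following. This is a sketch, not my actual configuration: the file names are hypothetical, and `Analysis`, `PYZ`, and `EXE` are names injected by PyInstaller itself when it executes the spec.

```python
# app.spec -- hypothetical minimal one-file PyInstaller spec.
# Analysis, PYZ, and EXE are provided by PyInstaller at spec-execution time.
a = Analysis(
    ['main.py'],
    excludes=['tkinter'],  # trim modules you know the app never imports
)
pyz = PYZ(a.pure)
exe = EXE(
    pyz, a.scripts, a.binaries, a.zipfiles, a.datas,
    name='app',
    upx=True,        # compress the bundled modules with UPX, as described above
    console=False,   # GUI application: suppress the console window
)
```

Running `pyinstaller app.spec` then produces the self-extracting single executable described above.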
pyqtdeploy works by freezing your code, turning it into a Qt project with some pyqtdeploy-specific glue code, and then compiling that project as if it were C++, which means you can link a static build of Qt against the generated code.
But in order to use pyqtdeploy, you need to have the libraries at hand for linking:
LIBS += -lQtCore
LIBS += -lQtGui
LIBS += -lpython36
LIBS += -lsip
There’s no way around it – you must build Python and the other dependencies from scratch, and this could take a long time.
I have also encountered strange errors, such as SOLE_AUTHENTICATION_SERVICE being undefined in the Windows API headers.
I mean, I suppose pyqtdeploy works, but is this even a path worth taking? What would be the pre-UPX size of such an executable – 25 MB, perhaps? That would put it on par with the AO executable.
I might as well write the launcher in C++, or switch to Tkinter.
After Hurricane Maria, I was invited to a Slack group in Puerto Rico to offer my programming expertise for anyone who needed it. After beginning to comprehend the magnitude of the communications problem, I scoured for ways to set up long-distance mesh networking – no, not mobile apps like FireChat that rely on short-distance Wi-Fi or Bluetooth to establish limited local communications – rather, ways to post and find information across the entire island, with relays that could connect through the limited submarine cables to the outside Internet as a gateway for government agencies and worried relatives.
During the three weeks I was interested in this project (though powerless to do anything, as I was taking classes), I investigated existing technologies (such as 802.11s), the capabilities of router firmware, the theoretical ranges of high-gain antennas, and other existing projects.
I saw Project Loon, but never expected much of it. The project must have taken a great deal of effort to take off, but unfortunately, it seemed to have a high cost with little return. Essentially, balloons were sent from some point on Earth and then led by high-altitude winds to cross Puerto Rico for a few hours, eventually to land at some location in the United States. Despite this effort, I found very few reports of actual reception from a Project Loon balloon.
Meanwhile, someone in the mesh networking Slack channel informed me that they were working with a professor at A&M to implement a mesh network from a paper that had already been written. While I ultimately never saw that mesh network implemented, I felt humbled by my naivete; accepting that my plans were undeveloped and unexecutable, I moved on with the semester. Surely, mobile carriers must have had all hands on deck to reestablish cell phone coverage as quickly as possible, which is certainly the best long-term solution to the issue.
However, many places other than Puerto Rico remain in dire need of communications infrastructure, in towns and villages that for-profit carriers have no interest in covering. Moreover, there are islands at risk of being cut off entirely in the event of a hurricane.
I am looking to start a humanitarian mission to set up a mesh network. I find that there are three major characteristics to a theoretical successful mesh network: resilience, reach, and time to deploy.
A mesh network that is not resilient is flimsy: one failed node, whether from bad weather or even vandalism, should not render all of the other nodes useless. Rather, the network should continue operating internally until connection can be reestablished, or the situation should be avoided entirely by maintaining connections with multiple other nodes, or even by wormholing across the mesh network via cellular data.
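To make the resilience property concrete, here is a minimal sketch in Python, with a made-up four-node topology: knock out any single node, and the remaining nodes should still reach one another.

```python
from collections import deque

def reachable(links, start, down=frozenset()):
    """Return the set of nodes reachable from `start`, skipping failed nodes."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for peer in links.get(node, ()):
            if peer not in seen and peer not in down:
                seen.add(peer)
                queue.append(peer)
    return seen

# Illustrative topology: a ring of four nodes plus one cross-link.
# The redundancy means any single node failure leaves the rest connected.
mesh = {
    'a': ['b', 'd'],
    'b': ['a', 'c', 'd'],
    'c': ['b', 'd'],
    'd': ['c', 'a', 'b'],
}
print(reachable(mesh, 'a', down={'d'}))  # {'a', 'b', 'c'}: still connected
```

A real routing protocol does far more (link metrics, route repair, loop avoidance), but this connectivity check is the invariant a resilient topology has to preserve.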
A mesh network that does not reach does not have any users to bear load from, and thus becomes a functionally useless work of modern art. No, your users will not install an app from the app store – besides, with what Internet? – or buy a $50 pen-sized repeater from you. They want to get close to a hotspot – perhaps a few blocks away in Ponce – and let relatives all the way in Rio Piedras know that they are safe. And to maximize reach, of course, you need high-gain antennas to make 10-to-15-mile hops between backbone nodes that carry most of the traffic, which then distribute the traffic to subsidiary nodes down near town centers using omnidirectional antennas.
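As a back-of-envelope check on those 10-to-15-mile backbone hops, here is a sketch using the standard free-space path loss formula. The frequency, antenna gains, and transmit power are purely illustrative numbers I picked for a point-to-point dish link; real terrain, rain fade, and Fresnel-zone obstruction would eat further into the margin.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Hypothetical backbone hop: ~12.5 miles (about 20 km) at 5.8 GHz.
loss = fspl_db(20, 5800)

# Illustrative link budget: 27 dBm transmitter, 24 dBi dishes on both ends.
tx_power_dbm, tx_gain_dbi, rx_gain_dbi = 27, 24, 24
rx_dbm = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - loss
print(f"path loss {loss:.1f} dB, received signal {rx_dbm:.1f} dBm")
```

With these numbers the received signal lands comfortably above typical radio sensitivity, which is why high-gain directional antennas on the backbone are non-negotiable: drop the dishes for omnidirectional antennas and the link budget collapses by tens of dB.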
A mesh network that takes too long to deploy will not find much use in times of disaster. Cellular companies work quickly to restore coverage – a mesh network simply cannot beat cell coverage once it has been reestablished. First responders will bring satellite phones, and the chances of switching to an entirely new communication system will dwindle as the weeks pass and volunteer workflows solidify.
How do I wish to achieve these mesh networking goals?
I imagine that the mesh network will predominantly serve a disaster-oriented social network with various key features:
One issue with this idea, I suppose, is the prerequisite of having a fully decentralized social network, which has yet to be developed. But we cannot wait until the next big disaster to begin creating resilient mesh networks. We must begin experimenting very soon.
Last time I read about threading, I read that “even experts have issues with threading.” Either that’s not very encouraging, or I’m an expert for even trying.
There are a bunch of threads and event loops in AC, and the problem of how to deal with them is inevitable. Here is an executive summary of the primary threads:
My fear is that the network threads will all get splintered into one thread per character session, and that Pyglet instances on the UI thread will clash, resulting in me splintering all of the Pyglet instances into their own threads. If left unchecked, I could end up with a dozen threads and a dozen event loops.
Then there is the possibility of asset worker threads for downloading. The issue here is possible clashing when updating the SQLite local asset repository.
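One way to sidestep that clashing (a sketch, not necessarily the design I will land on; the table and file names are made up) is a single-writer pattern: exactly one thread owns the SQLite connection, and download workers hand it rows through a queue.

```python
import queue
import sqlite3
import threading

# All asset-repository writes funnel through one dedicated thread, so no two
# threads ever touch the SQLite connection concurrently.
write_queue = queue.Queue()
stats = {}

def writer():
    db = sqlite3.connect(':memory:')  # the real repo would be a file on disk
    db.execute('CREATE TABLE assets (name TEXT PRIMARY KEY, data BLOB)')
    while True:
        item = write_queue.get()
        if item is None:              # sentinel: shut down cleanly
            break
        db.execute('INSERT OR REPLACE INTO assets VALUES (?, ?)', item)
        db.commit()
    stats['rows'] = db.execute('SELECT COUNT(*) FROM assets').fetchone()[0]
    db.close()

t = threading.Thread(target=writer)
t.start()
write_queue.put(('splash.png', b'...'))  # any worker thread can enqueue safely
write_queue.put(None)
t.join()
print(stats['rows'])  # 1
```

`queue.Queue` is thread-safe on its own, so the workers need no locks at all; serialization happens at the queue boundary.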
The only way to properly manage all of these threads is to take my time writing clean code. I cannot rush to write code that “works,” because of the risk of dozens of race conditions bubbling up, not to mention the technical debt I would incur. Still, if I design this correctly, I should not need a single explicit lock: the GIL makes individual bytecode operations atomic, though compound read-modify-write sequences would still need care.
One year since my arrival from Japan, I have learned an academic year’s worth of knowledge and grown a year more mature.
I spent vivid days enjoying lunch with others and lonely nights sulking in my dorm. I spent boring Sundays eating lunch at Kinsolving and busy days going to club meetings with people I never saw again.
As the sun changed inclination, so did my mind, it seems. Perspectives have changed. My mind melds and rearranges itself, disconnecting itself forever from the old memories of the physics lab and the traumatizingly strenuous AP exams.
As the semesters progress, people come and go. I am pulled out of one world and thrust into another, yet Japan still feels as if it were only last week. While I cannot recall every memory, the key ones still feel relatively vivid. I still feel the cotton of the yukata on my body; the refreshing chill of the small shower beside the onsen; the onsen’s disappointingly intolerable warmth; the calm, collected smile of the cashiers and service workers; the bittersweetness of having only been able to visit Akihabara once; the American pride of my Japanese teacher.
It is not certain what I will be doing on June 28, 2019, but it is certain that I will be saving money to return to Japan in 2020 for a study-abroad program.
When I noted in November that the experience would never happen again, I was correct – but this is merely to make way for even greater experiences in the unknown future.
My friend wishes to go to Japan for another week, but after viewing airline price ranges and possible dates, I politely observed that one week is simply not enough time – the insatiable longing to return to Japan would simply repeat itself. No: I need an entire semester to evaluate the culture of Japan, its people, and what it holds in store for enjoyment. I wish not merely to cherry-pick what I want to know, but rather to immerse myself completely in the language and culture. That should be enough to satisfy any individual.
However, I recognize that beyond this point, reminiscing about specific details of the trip becomes an obsession. I must strive to look forward and continue my studies of Japan from a holistic perspective.
I got an S9 from my father as part of a deal. I did not want the phone, but he got it anyway. This is a flagship device costing almost $1,000; not exactly a small step up from the S4.
I have been trying not to get the phone dirty with my sweaty hands, but too late for that. It appears to be a well-built and well-designed phone, although it looks prone to damage without adequate casing.
I am not particularly fond of two things: materialism, and giving away random information to any app that wants it.
I mention materialism because nothing lasts forever – the S4, at its time, was the pinnacle of technology, but we have somehow advanced even further in five years. It is difficult to imagine what a phone will look like in five more years. One must also remember that the smartphone is an instrument designed to get things done – an integrated PDA and cell phone – although these days it serves more as a game console.
There are also immense privacy risks one takes simply by using this phone. Android has grown to such tremendous complexity that even I, a programmer, cannot fully comprehend the design of the Android system. Many more apps now grab your location, since optimizations have been made to prevent excessive battery use when obtaining a fine location. And the system has become so tightly integrated that practically anything can access anything (if you allow it to).
The strongest aspect of this phone is its speed – whereas Google Maps takes 6 seconds to cold-start on my S4, it loads in about 1 to 1.5 seconds on the S9; essentially instantly.
Finally, this phone allows me to place “HD Voice,” “VoLTE,” “Wi-Fi,” and “HD Video” calls. All of these things seem to be exclusive to AT&T users, with a supported SIM card, with a supported phone (i.e. not an iPhone), in a supported location, on both sides. In essence, the feature is useless for 90% of calls. How much longer will it take to develop and adopt a high-quality communications infrastructure that is standard across all devices and all carriers, including iPhones? What ever happened to SIP – why didn’t Cingular give everyone a SIP address back in the day? Why do I have to use a cell phone to place a call using my number? Why do we still use numbers – when will we be able to switch to an alphanumeric format like e-mail addresses?
Yes, I understand that we have to maintain compatibility with older phones and landlines via the PSTN – whatever that is these days – and we also have to maintain the reliability of 911 calls.
The walled-garden stubbornness of Apple does not help much, either. Apple simply stands back and laughs at the rest of the handset manufacturers and carriers, who are struggling to agree on common communication interfaces and protocols. Will Apple help? Nope. Their business thrives on discordance and failure among the other cell phone manufacturers to develop open standards. And when they finally agree on an open standard ten years later – yoink! – Apple adopts it instantly in response to the competition.
As for other features, I found the S9’s Smart Switch feature to work perfectly: it was able to migrate everything on my S4, even the things on my SD card (I recommend removing the SD card from the original phone before initiating a transfer). It did not ask me about ADB authorization or anything like that, so I wonder how it was able to accomplish a connection to the phone simply by unlocking it.
When Android will finally have a comprehensive backup and restore feature, however, remains beyond my knowledge. This is Android’s Achilles heel by far.
Oh, and I forgot one last thing about the S9: it has a headphone jack 🙂
Let’s Encrypt has been operational for about two years now, although the project originally began in 2015. Let’s Encrypt is the saving grace of HTTPS, but precisely because it is the saving grace of HTTPS, I dislike its endorsement.
Suppose that tomorrow, a security researcher discovers a critical flaw in Certbot or some other part of the Let’s Encrypt certificate issuance system, and within a week, almost every Let’s Encrypt certificate must be tossed into the CRL, with no ability to issue new ones.
They couldn’t do it. They couldn’t possibly toss 100 million certificates into the fire, because LE has already reached a point where it is too big to fail. You can’t tell your users, who expect their website encryption to come for free, “Hey, your CA got compromised, so you’re going to have to pay $20 or more for a cert from Verisign, GeoTrust, or Comodo, because there are no other free, secure CAs available. Sorry.”
And if it comes to that, two things happen:
You have to remember the situation before Let’s Encrypt: Browser vendors, most especially Google and Mozilla, were pushing as hard as they could toward eradicating HTTP and enforcing HTTPS everywhere, in light of the Edward Snowden and NSA hysteria-bordering-paranoia. However, SSL/TLS certificate options were limited at the time: existing free certificate services had been founded long before then and were commonly suggested for people who were absolutely desperate for a free certificate, but were nonetheless unpopular among CA maintainers due to rampant abuse. In other words, on the idealistic side, people believed that every site ought to have HTTPS. But on the practical side, they asked whether your site really needed HTTPS if you couldn’t afford a certificate and were just serving static content.
Today, those old free CAs have been abandoned by CA maintainers in favor of the one CA to rule them all: the ISRG/Let’s Encrypt CA. I mean, we’re obviously not putting all our eggs in one basket here – if something goes wrong, we still have hundreds of CAs to go by, and if an owner really needs their HTTPS, they can just shell out $100 for a cert. That’s right, if you’re a website owner who cares more about their website than the average Stack Overflow user, you should really consider shelling out money, even though we’re sponsoring a cert service that is absolutely free! Oh, and if something goes wrong, you get what you paid for, right? My logic is totally sound!
Let me reiterate: in the case of a future catastrophe – assuming we are far enough into the future that browsers have placed so much trust in the HTTPS infrastructure that they now prevent casual connections to insecure HTTP websites – there are two answers, based on how much money you have:
So, I think the problem at hand here is the philosophy behind trust. Trust is such a complicated mechanic in human nature that it cannot be easily automated by a computer. When we make a deal on Craigslist, how do we know we’re not going to end up getting kidnapped by the guy we’re supposed to be meeting with? Is the only reason a bureaucracy trusts me as an individual because I can give them an identification card provided by the government? But how can I, as an individual, trust the bureaucracy or the government? Only because other people trust them, or people trust them with their money?
How does this tie into the Internet? How can I trust PKI, the trust system itself? What happens if I tie a transactional system – specifically the likes of Ethereum – into a web-of-trust system such as PGP? What happens if I tell people, “vote who you trust with your wallets“? What is a trustable identity in a computer network? What remedies does an entity have if their identity is stolen?
I have held off on making a post like this for a long time now, but I think it is now the time to do so.
I thought things would improve with Windows, but for the past five years (has time really gone by so quickly?), Microsoft has done nothing for its power users, effectively leaving them in the dark while it “modernizes” its operating system for small devices (netbooks and tablets).
Microsoft knows full well that power users are leaving in droves for Linux, so it developed the Windows Subsystem for Linux – essentially a remake of Interix – to let people “run Ubuntu” on their machines while keeping the familiar taskbar on their desktops, without having to tread through the territory of repartitioning, package management, and drivers. By pitching distros’ terse, hard-to-read documentation as a reason to stay on Windows, Microsoft has kept the uninformed lured into Windows 10.
Let’s remember what Windows used to be primarily for: office applications. Professionals and businesspeople still use Windows every day to get their work done. They were so invested in the system, in fact, that some of them took to learning keyboard shortcuts and other nooks and crannies of the system to work even faster (or to avoid relying on a mouse when it was uncomfortable).
Today, Windows is used for three reasons:
The weight of Win32’s legacy features is too heavy of a burden to keep Windows moving forward as it is. Windows 10 has a multi-generational UI: modern UI (e.g. PC settings menu) from Windows 8 and 10, Aero UI (e.g. Control Panel) from Windows Vista and 7, Luna icons (e.g. Microsoft IME) from Windows XP, and UI that hasn’t changed since the very beginning (e.g. dial-up, private character editor) from Windows 98 and 2000.
The problem is that many business users still depend on Win32 programs. Microsoft is in an extremely tight spot: they must push for new software, all the while keeping friction as low as possible during the transition process.
But if Microsoft is going to eradicate Win32, why bother developing for UWP? Why not take the time now to develop cross-platform applications? Hence, companies that care – that is, companies that do not sell their 15-year-old software as if it were “new” in 2018 – are targeting either the web or Qt (which is very easy to port). Other programs that require somewhat tighter integration with Windows are very likely to use .NET, which means pulling out C#.
Here are some reasons I still use Windows on my desktop:
However, these reasons are becoming less relevant: I am unfamiliar with Windows 10 (due to its inconsistent UI), and Windows 7 is losing support soon. Moreover, a reliable method of installing Office through Wine is in development, and hardware pass-through technologies such as VT-d now allow gaming performance in a VM to nearly match that of running Windows natively.
I am also tired of the support offered for Windows: those who actually know what they are talking about are called “MVPs,” and everyone else simply seems to throw canned messages for support requests. For instance, if you look up “restore point long time” on Google, the first result is a Quora question called, “Why does system restore point take so long on Windows 10?” with some nonsensical answers:
None of them answer the question: why does creating a system restore point take so long?
You can probably find similar blabber for why Windows Installer takes so long, or some technical feature of Windows.
These days, I don’t think many people really know how Windows works. How in the world am I going to use an operating system that nobody actually understands?
In comparison, any other well-supported Linux distribution has people so tough on support that they will yell at you to get all kinds of logs. With Windows, nobody really knows how to help you; with Linux, nobody wants to bother helping such a lowly, illiterate n00b as you.
As for Wine, if Microsoft did not financially benefit from it, Microsoft would have taken down the project before it ever even took off. My suspicion is that once Wine is at a stable state, Microsoft will acquire (or fork) the project and use it as a platform for legacy applications, once they have eradicated Win32 from their new Windows.
All in all, Windows has served me very well for the past years, but I have grown out of it. All the while, I wish to stay away from the holy wars fought daily in the open-source world, most especially the war between GPL and BSD/MIT, although they do seem to be getting along these days. The problems arise when MIT code is about to get linked with GPL code, and that’s when developers have to say “all right, I can relicense for you,” or, “absolutely not, read the GPL and do not use my software if you do not agree with it.”