Japan: the hyperfunctional society: part 1

This is intended to be a complete account of an eight-day trip to Japan, which had been planned for about two years by my native-speaking Japanese teacher, was organized by an educational travel agency, and included 26 other students of Japanese with varying levels of proficiency.

Names have been truncated or removed for the sake of privacy.

After many intermittent lapses in editing, I decided to just split it into two parts, as it was getting increasingly difficult to get myself to finish the narrative, but at the same time I did not want to hold back the finished parts. I am not intending to publish this for money or anything like that; please excuse my limited vocabulary and prose during some dull parts.

Domain change

After an entirely unexpected drop of the extremely popular homenet.org domain (yes, visitors from Google, “homenet.org is down”!), it became impossible to reach the website via longbyte1.homenet.org due to an unreachable path to FreeDNS. Thus, I decided to just finish moving to n00bworld.com. It took a while to figure out how to get WordPress back up and pointing to n00bworld.com, but I eventually succeeded.

What I do not know, however, is if I will succeed in finishing the account of the Japan travel. I have been putting that off for too long now. Ugh.

Internet

Without the Internet, I would never have amassed the knowledge I hold today. The wild success of the knowledge powertrains of Wikipedia and Google never ceases to captivate users into learning something new every day.

Yet, I loathe the Internet in numerous ways. It’s become what is virtually (literally virtually) a drug habit, and in a way worse than a drug habit, because I depend on it for social needs and information. Without it, I would lose interesting, like-minded people to talk with, as well as a trove of information that I would otherwise have to buy expensive books for.

But without the development of the Internet, what would humanity be…? I suppose we would return to the days when people would actually be inclined to talk face-to-face, invite each other to their houses, play around, sit under a tree reading a book, debug programs, go places, make things. It wouldn’t necessarily be a better future, but it would certainly be a different one. If it took this long to develop the Internet (not very long, actually), imagine the other technologies we are missing out on today.

And then there is the problem of the masses. The problem lies not in the quantity itself; it’s that attempting to separate oneself from the group merely comes across as elitism. And you end up with some nice statistics and social experiments and a big, beautiful normal curve, with very dumb people on one end and very intelligent people on the other.

This wide spectrum means that conflict abounds everywhere. People challenge perspectives on Reddit, challenge facts on Wikipedia, challenge opinions on forums, challenge ideas on technical drafts and mailing lists. And on YouTube, people just have good ol’ fistfights over the dumbest of things.

On the Internet, the demographic is completely different than in human society, even though the Internet was supposed to be an extension of human society. The minority – yes, those you thought did not exist: the adamant atheists, the deniers, the libertarians, the conspiracists, the trolls – suddenly become vocal and sometimes violent. The professionalism the Internet was designed with is nowhere to be found on any of the major streams of information. This is not ARPANET anymore. These are not scientists anymore, studying how to run data over wires to see if they can send stuff between computers. These are people who believe the Internet is freedom at last. Freedom to love, freedom to hate; to hack, to disassemble, to make peace, to run campaigns, to make videos, to learn something, to play games, to make opinions, to argue, to agree, to write books, to store things, to pirate software, to watch movies, to empathize, to converse, to collaborate, or just to tell the world you really hate yourself.

Thus, I am a victim of freedom and a slave to it. My friends do not talk to me anymore. I am just left with solitude and a keyboard.

Some ideas

Concept of AI itself

I’ve glanced at many papers (knowing, of course, that I know very little of their jargon) and concluded that the recent statistical and mathematical analysis of AI has simply been overthought. Yet the theory of AI from the 70s and 80s delves into entirely conflicting perspectives on the driving force of AI in association with morality and consciousness in the human brain.

Think about the other organs of the body. They are certainly not simple, but after 150 years, we’ve almost figured out how they work mechanically and chemically. The challenge is how they work mathematically, and I believe that an attempt to determine an accurate mathematical representation of the human body would essentially lead to retracing its entire evolutionary history, down to the tiny imperfections of every person across each generation. Just as none of our hands are shaped the same, our brains are most likely structured uniquely, save for their general physical structure.

I conjecture that the brain must be built on some fundamental concept, but current researchers have not discovered it yet. It would be a beautiful conclusion, like the mass-energy equivalence that crossed Einstein’s mind when he was working in the patent office. It would be so fundamental that it would make AI ubiquitous and viable for all types of computers and architectures. And if this is not the case, then we will adapt our system architectures to the brain model to create compact, high-performing AI. The supercomputers would only have to be pulled out to simulate global-scale phenomena and creative development, such as software development, penetration testing, video production, and presidential-class political analysis and counsel.

Graph-based file system

Traditional file systems suffer from a tiny problem: their structure is inherently a top-down hierarchy, and data may only be organized using one set of categories. With the increasing complexity of operating systems, the organization of operating system files, kernel drivers, kernel libraries, user-mode shared libraries, user-mode applications, application resources, application configurations, application user data, caches, and per-user documents is becoming more and more troublesome to maintain. The POSIX structure is, at present, “convenient enough” for current needs, but I resent the necessity to follow a standard method of organization when it introduces redundancy and the misapplication of symbolic links.

In fact, the use of symbolic links exacerbates the fundamental problem with these file systems: they operate at too low a level, and their attempts to reorganize and deduplicate data simply increase the complexity of the file system tree.

Instead, every node should consist of metadata, plus either data or a container linking to other nodes. Metadata may contain links to other metadata, or even to nodes consisting solely of metadata encapsulated as regular data. A data-only node is, of course, a file, while a container node is a directory. The difference is that in a graph-based file system, each node is uniquely identified by a number rather than a string name (a string name in the metadata is still used for human-readable listings, and a special identifier can be used as a link or locator of the node for other programs).
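
Just to make the idea concrete, here is a minimal sketch of what such a node might look like (my own illustration; the field names are hypothetical and not part of any real file system):

    // Hypothetical node layout for a graph-based file system.
    // Every node is addressed by a numeric ID, never by a path.
    #include <cstdint>
    #include <string>
    #include <vector>

    using NodeId = std::uint64_t;

    struct MetadataEntry {
        std::string key;      // e.g. "name", "type", "owner"
        std::string value;    // the human-readable name lives here, not in a path
        NodeId      link = 0; // optional: metadata may itself link to another node
    };

    struct Node {
        NodeId                     id;        // unique identifier of this node
        std::vector<MetadataEntry> metadata;  // attributes and links to metadata nodes
        std::vector<std::uint8_t>  data;      // file contents (empty for pure containers)
        std::vector<NodeId>        children;  // links to other nodes (the "directory" role)
    };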

The interesting part about this concept is that it completely eliminates the need for file paths. A definite, specific structure is no longer required to run programs. Imagine compiling a program, but without the hell of locating compiler libraries and headers, because they have already been connected to the node where the compiler was installed.
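
Under such a scheme, a build tool would follow links by node ID instead of joining path strings. A toy sketch of that lookup (entirely hypothetical; the in-memory table stands in for links stored in node metadata):

    #include <cstdint>
    #include <optional>
    #include <string>
    #include <unordered_map>

    using NodeId = std::uint64_t;

    // Toy link table: (node, link name) -> target node. In a real graph-based
    // file system these links would live in each node's metadata.
    std::unordered_map<NodeId, std::unordered_map<std::string, NodeId>> links;

    std::optional<NodeId> resolve_link(NodeId from, const std::string& name) {
        auto node = links.find(from);
        if (node == links.end()) return std::nullopt;
        auto target = node->second.find(name);
        if (target == node->second.end()) return std::nullopt;
        return target->second;
    }

    int main() {
        const NodeId compiler = 42;           // the node where the compiler was installed
        links[compiler]["headers"]   = 1001;  // linked at install time
        links[compiler]["libraries"] = 1002;

        // The build tool asks the compiler's node for its headers -- no include paths.
        if (auto headers = resolve_link(compiler, "headers")) {
            // ...hand *headers straight to the compiler front end...
        }
    }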

The file system size could be virtually limitless, as one could define specifics such as bit widths and byte order upon the creation of the file system.

Even the kernel would base itself around this system, starting from boot. Upon mount, the root node is retrieved, linking to the core system files and the rest of the operating system; package management to dodge conflicts between software wouldn’t be necessary, as everything is uniquely identified and can be flexibly organized to correctly define which applications require a specific version of a library.

In essence, it is a file system that abandons a tree structure and location by path, while encouraging references everywhere to a specific location of data.

Japanese visual novel using highly advanced AI (HAAI)

This would be an interesting first product for an aspiring AI company to show off its flagship “semi-sentient” HAAI product. Players would be able to speak and interact with characters, with generated responses including synthesized voices. A basic virtual machine containing a switchable English/Japanese language core, a common sense core (simulating about ten years’ worth of real-life mistakes and experiences), and an empathy core (with a driver, to be able to output specific degrees of emotion) would be included in the game; developers would then parametrize it and add quirks for each character, so that every character ends up with a unique AI VM image.
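
If such AI VM images existed, the per-character parametrization might look something like this (purely speculative, of course; the “cores” are just the ones described above, and every name here is made up):

    #include <string>
    #include <vector>

    enum class Language { English, Japanese };   // switchable language core

    struct EmpathyParams {                       // empathy core, with an output "driver"
        float warmth   = 0.5f;                   // 0..1, degree of emotional expressiveness
        float patience = 0.5f;                   // 0..1, how long before irritation shows
    };

    struct CharacterVM {
        std::string   name;
        Language      active_language = Language::Japanese;
        int           experience_years = 10;     // depth of the common sense core
        EmpathyParams empathy;
        std::vector<std::string> quirks;         // per-character quirks added by the devs
    };

    int main() {
        CharacterVM heroine;
        heroine.name = "Aoi";                    // hypothetical character
        heroine.empathy.warmth = 0.9f;
        heroine.quirks = {"speaks overly politely", "afraid of thunderstorms"};
    }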

In fact, the technology showcased would be so successful that players would spend too much time enjoying the authentic, human-like communication and getting to know the fictional characters too well, warranting a health-and-safety-style warning upon launching the game: “This game’s characters use highly advanced artificial intelligence. No matter how human-like these fictional characters may seem, they are not human beings. Please take frequent breaks and talk to real, human people periodically, to prevent excessive attachment to the AI.”

EF review for Japan

They said they’d be posting my review “this fall,” which I guess implies that they screen and censor each review for any personal information. Also, I had to write the review in a tiny textbox in Internet Exploder because it failed to work in any other browser, and when I go to the “write review” menu, it’s as if I had never submitted a review in the first place. What a horrible web infrastructure their website has.

I’ll post my full account of my experience in Japan in a few days, but for now, please enjoy my scathing three-star review of the EF tour. The country is great, but the tour was certainly not.


One cannot review the culture and aspects of a country; it is not something that stars can be placed on. You can choose any country that EF offers tours for and expect a great experience simply from being present in a new environment with classmates. This part does not change with any educational tour or travel agency.

Thus, I will focus primarily on the tour itself, which is the part that EF specifically offers in competition with other travel agencies. I will cover praise and criticism point by point rather than in chronological order.

Praise

  • There was no outstanding need to contact EF. The tour and flights were all booked correctly.
  • Good density of places to visit. The tour’s itinerary was loaded with many points of interest, yet there was no feeling of exhaustion. I took around 900 photos by the conclusion of the tour.
  • Excellent cost-effectiveness. It’s difficult to beat EF in terms of pricing, especially in how they provide a fairly solid estimate with one big price tag.
  • Tour guide knew his history very well, even if he was unable to explain it fluently. You could ask him about the history of a specific point of interest, and he could tell you very precisely its roots, whether they be from the Meiji, Edo, or Tokugawa period.
  • Every dinner was authentic Japanese food. No exceptions.

Criticism

  • Tour guide had poor command of English and was extremely difficult to understand. In Japan, “Engrish” is very common, and it’s admittedly very difficult to find someone who can speak English fluently and correctly. However, this really reveals that you get what you pay for: if you want a cheapo tour, you will get a cheapo tour guide who might not be all you wanted. I will reiterate this: he was not a captivating tour guide, and it took great effort to try to absorb the information he was disseminating.
  • Little time spent in the actual points of interest, possibly due to an inefficient use of the tour bus. In many cases, it’s cheaper and faster to use the subway to get to places, although I concede that the tour bus is useful in times where one wants to see the area that leads up to an important or unfamiliar destination. Still, on the worst day, we were on the bus for a cumulative three hours, yet we only had around forty to fifty minutes per point of interest. No wonder I took so many pictures, as the tour felt rushed and didn’t give me time to take in the view before we had to get back in the bus to go somewhere else.
  • Miscommunication with EF during the tour. We were promised two people to a room at the first hotel, but were instead assigned three to a room. The arrangement wasn’t that bad in the end, but it still contradicted the claims made in the travel meetings. What’s more, we were told that an EF group from Las Vegas would be merging with our group, but this also never happened (they toured separately from us, though we encountered them occasionally).
  • Reversed tour. There is, in fact, fine print allowing EF to do this if reversing the tour would save money, but it’s still unpleasant and detracts from the intended experience. My group leader, who is a native speaker I know very well, told me before the tour that she was irritated by the reversal, since it’s much better to start from Tokyo, the modern part of Japan, and work one’s way southward to the more traditional Kyoto.
  • The last day of the tour was poorly planned by EF, so our group leader had to change the itinerary of that day (well before the tour, obviously) to some significantly better plans. Originally, the whole day would have been basically hanging around in Ueno Park, but she changed that to going to Tokyo Skytree, Hongwanji Temple, the Tsukiji fish market (which is moving elsewhere very soon), and the Edo-Tokyo Museum. We had to foot the bill for the attractions of this day, including Skytree, the museum, and 100 grams of toro (fatty tuna).
  • Poor distinction between what is already paid by EF and what we would have to pay for in addition to our tour. For instance, some of our subway tickets were already bought ahead of time by our tour director, but some we had to pay for with our money, which doesn’t really make sense because all of the transportation was supposed to have been covered by the tour cost.
  • Our group leader (and her husband and kids) ended up doing most of the work, especially rounding everyone up and making sure no one was missing.
  • Less time than you would expect to spend your own money. After all, they want the tour to be educational, rather than just general tourism. But the interesting part was that we had to vote to go back to Akihabara, because we were only given two hours (including lunch!) to buy the games and figurines we had always wanted to buy from Japan. Even after the small petition, the final decision was to make Akihabara and Harajuku mutually exclusive, which means that you could only choose to go to one or the other. I decided to just go to Harajuku purely because I’d feel guilty if I didn’t stick to the original plan, but I regret the decision in retrospect because I ended up buying absolutely nothing there. (They just sell Western clothes in Harajuku, so you’re a Westerner buying used Western clothes in a non-Western country.)

There are probably quite a few points I am missing here, but this should be sufficient to give you an idea of the specifics of the tour that are not covered in the generic “it was really great and I had a lot of fun!!” reviews.

As a recent high school graduate, I’ll be looking forward to my next trip to Japan, but this time with another travel agency that provides more transparency in terms of itinerary and fees. I’d also be predisposed to spending more money to get a longer and better quality tour that actually gives me time to enjoy viewing the temples and monuments, rather than frantically taking pictures to appreciate later.

On Aseprite

Once upon a time, my uncle wanted to give me Photoshop CS5 as a present for my tenth birthday. However, as he did not bring the physical box along with him when he visited (he was a graphic artist at the time), he ended up installing a cracked copy when I wasn’t on the computer. I kept whining that it was illegal, that he couldn’t do that, and that now there were going to be viruses on my computer, but he explained calmly that there was no other way, since he didn’t have the CD with him. So I said okay, vowing I’d uninstall it later, but after a while of using it, it kind of stuck, and no malware appeared (to this day, I am surprised at how he managed to find a clean copy so quickly). The only condition, as he stated, was that I could not use Photoshop for commercial use – basically, you can’t sell anything you make with this cracked Photoshop. Fair enough.

Even so, I steered away from Photoshop, as anything I made with it felt tainted with piracy. Later, I’d use it a little more, but I placed little investment in learning the software, as I had made no monetary investment in the software at all. I used Paint.NET instead, and despite its shortcomings (no vector mode, no text layers, half-decent magic wand, no magnetic lasso), the shortcuts felt familiar and the workflow remained generally the same as that of Photoshop. People also recommended Gimp as “the only good free alternative to Photoshop”, but I didn’t like Gimp because literally every shortcut is different, and the workflow is likewise totally different. The truth was that Photoshop was Photoshop, and Gimp was Gimp.

Yet I sought to do pixel art. This was supposed to be an easy endeavor, but Paint.NET was an annoying tool. Eventually, I found David Capello’s Aseprite and had no trouble adapting to the software, as it was designed for pixel art.

I had few complaints, but they had to be dismissed; after all, this was software still in the making. Only relatively recently was symmetry added, and the software was made more usable. I also liked its $0 price tag – if you were competent enough to compile the binaries yourself. And because the software was GPL, you could even distribute the binaries for free, even though Capello charged money for them. Capello was happy, and the FOSS community was happy. Some even tried setting up Aseprite as an Ubuntu package in universe, although it generally wasn’t up-to-date, due to stringent updating guidelines.

Until the day Capello decided to drop the GPLv2. I knew the day was coming and wasn’t surprised when the news came. Plop, the old GPLv2 came off, and subsequent versions were released under a license of his own making, forbidding distribution of binaries and further reproduction. The incentive to make pull requests adding features was gone – after all, you were really just helping someone out there earn more money, as opposed to contributing to a genuine open-source project. Of the 114 closed pull requests, only 7 are from this year (as of the time of writing).

In fact, the entire prospect of Aseprite continuing as an open-source project collapsed, for Capello had bait-and-switched the FOSS community into supporting his image editor because it was “open source,” without clearly disclosing his ulterior motive of dropping the license in the future. Licensing under GPLv2 rather than GPLv3 was, after all, no mistake – perhaps it had something to do with compatibility with Allegro’s license, or more permissiveness for other contributors? No. It had to do with a clause that GPLv3 has but GPLv2 does not: the irrevocable, viral release of one’s code to the open-source realm. Without this clause, and because he was the owner of the code, Capello could simply rip off the old license and slap on a more proprietary one, which is exactly what he did.

The argument in defense of Capello was, “Well, it’s his software, he can do whatever he wants.” After all, he was charging for the program anyway. But the counterargument is that the GPL is intended by the Free Software Foundation to promote the open-source movement, not to deceive users into thinking your for-profit project upholds the ideals of free and open-source software, especially the open part: free as in freedom, not just free as in beer. Now there is not only a price tag on the product, but also a ban on distributing binaries, thanks to this incredible decision to make more money.

Yes, I know someone has to keep the lights on. You can do that in many ways, but one of them is not turning your “open-source” project into downright proprietary software. Now, people demand more and contribute less – why should they pay when there are fewer results and fewer features being implemented? The cycle of development decelerates, and putting money into Aseprite is now a matter of business rather than a matter of gratitude.

I don’t remember how to compile Aseprite at this point. I remember it being mostly a pain in the butt having to compile Skia, but that’s about it. Thus, I have no more interest in using Aseprite.

Now that I am entering college, Adobe is offering absolutely no discounts on its products. It’s almost as if they want kids like me to go ahead and pirate Photoshop again. There is no way I am going to afford a single program that costs as much as an entire computer. Yes, I know, Aseprite is obviously cheaper than Photoshop, but why should I buy a pixel editing tool when I can get something that can do all kinds of image manipulation?

A slap to the face goes to the general direction of Adobe and David Capello. Good job for keeping the image editing market in the status quo.

On Arduino

This is not intended to be a full explanation of Arduino, but rather an address of some misconceptions of what Arduino is and what it’s supposed to be. I am by no means an expert and I use an Elegoo Uno (which is an Arduino knockoff), because I am a cheap sore loser.

Arduino is intended to be an accessible, ready-to-use microcontroller kit for prototyping. For cost reasons, the designers decided to use an Atmel AVR microcontroller (an ATmega8, ATmega168, or ATmega328/328P, depending on the board revision).

Now that we know this, let’s get into the misconceptions.

“Arduino is Arduino”

Meaning that Arduino is its own thing and you can’t use anything to replace it. No. Arduino is simply a PCB containing:

  • the microcontroller you want to use
  • an accessible way to get to the pins supported by the microcontroller
  • an external clock crystal you can swap out
  • a couple of fuses so you don’t burn your toy out from playing with the leads
  • a USB controller for easy programming (which actually might turn out to be more powerful than your target microcontroller)
  • USB/12V ports
  • firmware that facilitates easy programming for the target microcontroller

You could rig your own programmer for your target microcontroller and solder everything yourself, but then you’d be missing the point. It’s for convenience. Any manufacturer can make “Arduino”-like kits, and they’d work great anyway.

Arduino IDE is the only way to program the Arduino

Wrong again. This is actually the most rampant misconception out there. Actually, the Arduino IDE is a horrible “IDE,” if you can even call it that. It is quite literally a Java application with the Processing user interface (because Arduino grew out of Wiring, which in turn was based on Processing). When you compile something, it just executes a preprocessing step that takes your code and slaps on some standard headers, then it invokes the prepackaged avr-gcc that actually does the heavy lifting. When you upload something, it invokes avrdude with the COM port you chose in the menu and wow, magic!
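
To make that concrete, here is roughly what the toolchain ends up compiling for a blink sketch (a simplified reconstruction from memory, not the IDE’s exact output):

    // What you write in blink.ino:
    //   void setup() { pinMode(13, OUTPUT); }
    //   void loop()  { digitalWrite(13, HIGH); delay(500);
    //                  digitalWrite(13, LOW);  delay(500); }

    // What effectively gets compiled after the preprocessing step:
    #include <Arduino.h>   // the "standard header" that gets slapped on

    void setup();          // auto-generated prototypes
    void loop();

    void setup() { pinMode(13, OUTPUT); }
    void loop()  { digitalWrite(13, HIGH); delay(500);
                   digitalWrite(13, LOW);  delay(500); }

    // The Arduino core then supplies main(), roughly:
    //   int main() {
    //     init();          // timers, ADC, etc.
    //     setup();
    //     for (;;) loop();
    //   }
    // avr-gcc turns all of this into a .hex, and avrdude pushes it over the
    // serial port you picked.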

If you want, you can write your own Makefile or CMake configuration that invokes all of this. I actually recommend this route, because then you are free to use any text editor you like.

Arduino uses its own programming language

“Wow, it has classes, it must be Java!” “Hmm, it could be Processing.” Nope, it’s C++. The only notable thing it doesn’t have is exceptions, and that’s just because the toolchain and runtime for the AVR are built without any exception handling support at all. So, every time you read an “Arduino Programming Language” tutorial, you’re actually being deceived into writing ugly C++ code. Take a small breath, and realize you’ve been passing your big objects by value instead of by address all along. Use pointers.
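
A quick illustration of the pass-by-value trap (my own example; the struct is made up, but the 2 KB of SRAM on an ATmega328 is not):

    #include <Arduino.h>

    // A "big" object by AVR standards: 64 bytes of sample data.
    struct SampleBuffer {
        uint8_t samples[64];
    };

    // By value: the entire 64-byte struct is copied onto the stack on every
    // call. With only 2 KB of SRAM, that adds up quickly.
    uint16_t sumByValue(SampleBuffer buf) {
        uint16_t total = 0;
        for (uint8_t i = 0; i < 64; ++i) total += buf.samples[i];
        return total;
    }

    // By pointer: only a 2-byte address is passed.
    uint16_t sumByPointer(const SampleBuffer* buf) {
        uint16_t total = 0;
        for (uint8_t i = 0; i < 64; ++i) total += buf->samples[i];
        return total;
    }

    SampleBuffer buf;

    void setup() {
        Serial.begin(9600);
        Serial.println(sumByPointer(&buf));  // prefer this form over sumByValue(buf)
    }

    void loop() {}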

ATmega328 is like any other processor, but smaller

Except it’s not. It’s an 8-bit RISC processor with a tiny instruction set, clocked at around 16 MHz, which is only marginally better than the clock speed of a Zilog Z80. Even with a very powerful language at your disposal, you still have to optimize your code.
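
As a small example of what “optimize” means on this chip (my own sketch, assuming an ATmega328-class target): prefer narrow integer types and avoid floating point, which the AVR has to emulate in software.

    #include <stdint.h>

    // Heavier: floating-point scaling pulls in the soft-float routines.
    uint16_t scaleSlow(uint16_t raw) {
        return (uint16_t)(raw * 0.8f);
    }

    // Lighter: stay in integer math; raw * 0.8 is approximately raw * 205 / 256.
    uint16_t scaleFast(uint16_t raw) {
        return (uint16_t)(((uint32_t)raw * 205) >> 8);
    }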

Anyway, I’m tired and I’m out of ideas for what to write next.

Tape drive VCR, part 1

One day I had this amazing idea! I was looking through the tape drives for sale, and as usual they were over $1,200 for LTO-5 or LTO-6 tape drives, which are the only generations that can match the current hard drive market. There are so many unused VHS tapes, and with the untapped potential of analog storage media, you could store digital media in these cassettes! After all, they’re just tapes! You could make… a tape drive using a VCR!

All right, I think you’ve got the sarcasm and naivety of my thought process. I mean, if you think about it only for a few seconds, it’s just silly humor. But when it remains within your mind for days on end, wondering whether or not it truly is possible, you feel as if the only way to find out is to try it yourself.

Let’s take a closer look at this incredible idea. The first and only popular stab at it was ArVid. It was basically this Russian ISA card that ran composite video to your VCR, and that was it. It could store data at up to 325 KB/s, and with some simple math we come up with almost exactly 2 GB on an E-180. And you know what, a lot of people said “yeah, I guess that’s reasonable,” and they stopped there.
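
As a quick sanity check on those numbers (my arithmetic, using decimal units): an E-180 runs for 180 minutes, so 2 GB spread over the whole tape averages out to

    2 × 10⁹ bytes ÷ (180 × 60 s) ≈ 185 KB/s,

which sits comfortably below the 325 KB/s peak rate.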

But ArVid has some huge limitations that, if removed, could have let it store much more. First, it has only two symbols: luma on and off (!!!), which already makes the storage incredibly inefficient! It uses some Hamming code for ECC, but that’s about it, according to Wikipedia. Now, I’m no expert on signal processing (I just started seriously reading about this an hour or two ago), but with QPSK or QAM, we can make it significantly more efficient. So, screw ArVid.

We also don’t need an additional card to bring the analog data over to the VCR. We can use the sound “card” that is already built into the motherboard to produce the analog signals we need, and at an acceptable sample rate too (while “sample rate” doesn’t exist when we’re talking about pure analog signals, we do still need to convert digital signals over to analog, but the sound card can only support up to 96 kHz or 192 kHz, thereby limiting our symbol rate). A separate sound card might still be convenient, however, given that this method may hinder a user’s ability to use sound at all (or the user may accidentally trigger a system sound that interferes with the data throughput).
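
For a feel of what that digital-to-analog step might look like, here is a bare-bones QPSK modulator (my own sketch with made-up carrier and symbol-rate choices; a real system would need pulse shaping, synchronization, and error correction on top of this):

    #include <cmath>
    #include <cstdint>
    #include <vector>

    const double kPi = 3.14159265358979323846;

    // Turn bytes into passband QPSK samples that a 96 kHz sound output could play.
    std::vector<float> qpskModulate(const std::vector<std::uint8_t>& data,
                                    double sampleRate = 96000.0,  // sound-card ceiling noted above
                                    double carrierHz  = 12000.0,  // hypothetical carrier
                                    int samplesPerSymbol = 8)     // => 12 kbaud, 24 kbit/s raw
    {
        // Carrier phase for each 2-bit symbol value 0..3 (Gray-mapped).
        const double phase[4] = {kPi / 4, 3 * kPi / 4, 7 * kPi / 4, 5 * kPi / 4};
        std::vector<float> out;
        std::size_t n = 0;  // running sample index
        for (std::uint8_t byte : data) {
            for (int shift = 6; shift >= 0; shift -= 2) {      // four symbols per byte, MSB first
                int sym = (byte >> shift) & 0x3;
                for (int s = 0; s < samplesPerSymbol; ++s, ++n) {
                    double t = n / sampleRate;
                    out.push_back(static_cast<float>(
                        std::cos(2 * kPi * carrierHz * t + phase[sym])));
                }
            }
        }
        return out;
    }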

So, how much data exactly do we think a VHS tape could carry? I think that in a perfect world with an ideal design, it would be somewhere between 80 and 160 GB. However, formal calculations based on the modulation to be used would be required to prove this, so I will not talk much about it.

Instead, I’ll discuss the practicality of this design. Yes, you could hack a remote control and stick it to the VCR, and that would be the interface for communication. Haha! But to be honest, I’m not really willing to destroy my VCR and remote just to figure out how well this is going to work. The solution, then, becomes fairly clear: just instruct the user on what to do. The user would note where a piece of data is stored, move the head to just before that point, and hit “read” right before the data is reached. The signal would then be aligned and processed properly.

Alternatively, we can tell the user to “initialize” the VHS by having the software sprinkle position markers across the tape. They don’t have to be exact placements, but they give the software an idea of what space has been consumed and where to go based on the last read position marker, assuming that the software is tracking where data has been stored in some sort of external master file table. This can then be turned into simple “rewind for about 20 seconds” commands given to the user. The user would play back a little bit, which would allow the software to give feedback on how close they are to the data (and if actual data is being played back, this should be detected and the user instructed to go back to the beginning of the data).

I’ve been taking a look at GNU Radio and I think this should give me a fair estimation of what modulation method(s) to use, and how much noise is expected. We’re dealing with VHS, which is great, because the expected noise is extremely low.

Soldering

The big problem with soldering is resources. If you don’t have the right materials, the right solder, and the right flux, you’re going to end up botching the whole thing like I did.

(Embedded imgur post.)

It was fairly obvious that I was going to mess up. But hey, you know what they say: if you must fail, fail spectacularly!

Oh well, eventually I’ll have this 20×4 LCD set up and wired to the Elegoo Uno R3 (an Arduino/Genuino Uno clone). Unfortunately, I don’t have those easy-to-solder header pins, which is why I had to do this ugly hack of soldering the cables in directly. Hopefully the LCD doesn’t turn out to have been destroyed by the heat.

Zero-gravity soccer – part 1

A few weeks ago I was assigned a final project. The final project could be anything as long as it’s written in Python. So I chose to make a game.

And so the mad scramble began. Actually, it wasn’t really a mad scramble at all. I took my time with the code, working on it only when I was able to do so. And so without the distractions of my brother, I was able to knock out 8 hours of coding today, which equates to 570 lines to check into source control.

Python is an incredibly addictive language. I thought it was just some simple language for kids; boy, was I wrong. It is a language of elegance, of minimalism. It makes Java look like a rusty pipe under a sink (which it is, for the most part). Say goodbye to curly braces and excess if statements. And if there are any bugs in your code, they are incredibly easy to find, even without an IDE.

Python does have its shortcomings, however. Its object-oriented design isn’t exactly familiar, and its mechanics are definitely not explicit. Still, it allows for multiple inheritance, along with a degree of control you could never have with Java. In Java, you have to make a rigid model of the class before actually implementing it, and shuffling constructors around leads to problems down the line fairly quickly. In Python, however, you can build the implementation first and then make an object encasing that behavior. It is purpose-driven rather than enterprise-driven, and so it works extremely well for small projects.

This is what I’ve been able to accomplish so far. I have until the 20th to “ship” the project, if you will, and I’m quite satisfied with the progress. I estimate it will only take 500–750 more lines to bring it to a playable state, but then again, I cannot make a fair estimate of line count because it’s not really what matters. I still need to implement networking, the HUD, and some game-specific behaviors like grabbing the ball and throwing it at the goal.

I shall press forward…