Making an e-bike with display

This is an explanation of another one of those ambitious projects: one I really want to do, but for which I have neither the experience nor the people to actually do it with.

I hate rough inclines: they kill my legs. The number one deterrent to riding a bike in my childhood was that my neighborhood has some very steep inclines. That made riding a bicycle a rather unpleasant experience, and my father never wanted to take me to a park to ride, so in the end, I never really used my bike.

However, given that a bicycle is the only practical way to get around quickly in the city where I attend college, I want to start riding again. And after a year or so of riding that bike, I want to make the riding experience cooler.

First, I want to retrofit a brushless DC motor to the drive shaft; something rated for around 600 W of power output. If it is not possible to attach it directly to the hub, I’ll attach it to the shaft with a belt; ideally, a belt with the quality of a timing belt. But I hope I don’t have to do this, because if so, I’d have to play with the tension, pitch, and so on of the belt, which would be problematic.

Next come the electronic speed controller (ESC) and the charge controller. I want the controllers to automatically switch to a regenerative mode under light braking by bypassing the ESC, inverting the poles of the motor, and feeding the current straight to the charge controller. Then, on pedaling, the controllers should switch back to drive mode. This behavior would be directed by the main controller, since regenerative braking is a non-essential feature.

Speaking of a main controller, what exactly is it? The main controller is the Arduino, or whatever microcontroller I decide to use, wired to the ESC and charge controller; it is not required for the bike to operate, so a fatal error or a low battery won’t leave me stranded. It would run a real-time operating system with prioritized continuous tasks and many, many interrupt routines. These would be its high-level tasks, in order of descending priority:

  1. Emergency brake applicator. Continuously checks for the “emergency stop” button, the dead man’s switch (clipped to my clothes; the clamp is limited enough that it cannot be clipped to the handlebars or any other part of the bike, while its other end is magnetically attached to a port on the control box), or >95% application of the brakes while moving at considerable speed.
  2. 10 Hz alternating pulse. This signal is generated and passed through some kind of failsafe circuit, which then determines whether or not the ESC should be enabled. The alternating pulse ensures that the main controller is not “frozen” on an operation that could prevent it from stopping the motor; the assumption is that as long as the pulse keeps alternating, the controller is working as intended. (A sketch of this task follows the list.)
  3. Speedometer. It samples how fast the back wheel is spinning and computes the current speed.
  4. Speed regulator. This task scales back the output DC current based on how close the bike is to the speed limit. This task can be overridden, but it’s not a good idea to do so.
  5. Brake detector. This task measures the brake application percentage. Brake actuation is completely analog, but if the application is significant, the main controller can signal a switch to regenerative mode.
  6. Pedal detector. This task simply detects how much positive force is being applied to the pedals and sets the target DC current proportional to that force (clamped, of course).
  7. Odometer. It uses the same wheel sensor as the speedometer, but it increments the distance traveled by the circumference of the wheel. After around 0.2 miles, it writes the total to EEPROM. I suppose I could use a moving pointer to level the wear on the EEPROM cells, or I could use a preexisting file system designed specifically for microcontrollers. (A wear-leveling sketch also follows the list.)
  8. Display driver. This assumes that there exists a layer of abstraction between the UI and the display itself.
  9. Sound driver. Just for basic beeps and boops.
  10. Main UI. This handles button interrupts (the calls of which are passed to the foreground user task), the failsafe UI (if all user-mode UI tasks are dead), and the UI toolkit itself.
  11. Foreground user task. Dashboard, options, etc. Must not directly control motor operation.
  12. Background user tasks. Battery icon, clock, etc. Must be non-critical.
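
Here is a minimal sketch of how task 2 might look, assuming an Arduino-class board running a FreeRTOS port (Arduino_FreeRTOS, in this case). The pin number, the task priority, and the external failsafe circuit are assumptions for illustration; the real firmware would add the remaining prioritized tasks around it.

    #include <Arduino_FreeRTOS.h>

    const uint8_t HEARTBEAT_PIN = 5;  // feeds the external failsafe circuit (assumed wiring)

    // Task 2: toggle the heartbeat line at 10 Hz. If this task ever stalls,
    // the line stops alternating and the failsafe circuit disables the ESC
    // on its own, without any help from the (possibly frozen) controller.
    void heartbeatTask(void *pvParameters) {
      (void) pvParameters;
      bool level = false;
      for (;;) {
        level = !level;
        digitalWrite(HEARTBEAT_PIN, level ? HIGH : LOW);
        vTaskDelay(pdMS_TO_TICKS(50));  // 50 ms high + 50 ms low = one 10 Hz cycle
      }
    }

    void setup() {
      pinMode(HEARTBEAT_PIN, OUTPUT);
      // Near-highest priority: only the emergency brake task would sit above it.
      xTaskCreate(heartbeatTask, "beat", 128, NULL, 3, NULL);
      // This port starts the scheduler once setup() returns; on a bare
      // FreeRTOS port, vTaskStartScheduler() would be called here instead.
    }

    void loop() {}  // idle; all real work happens in the tasks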
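
And here is one way the odometer’s EEPROM wear leveling (task 7) could work, assuming the standard Arduino EEPROM library on an AVR-class part. Instead of rewriting one fixed cell every 0.2 miles, the writes rotate through a small ring of slots, and the newest entry is found by its sequence number at boot; the slot layout is, again, just an illustration.

    #include <EEPROM.h>

    // One odometer record; erased EEPROM cells read back as 0xFF bytes.
    struct OdoRecord {
      uint32_t sequence;    // monotonically increasing write counter
      uint32_t hundredths;  // total distance, in hundredths of a mile
    };

    const int SLOT_COUNT = 32;                 // 32 slots x 8 bytes = 256 bytes of EEPROM
    const int SLOT_SIZE  = sizeof(OdoRecord);

    // Find the slot holding the highest sequence number (the most recent write).
    int findNewestSlot() {
      uint32_t best = 0;
      int bestSlot = 0;
      for (int i = 0; i < SLOT_COUNT; i++) {
        OdoRecord rec;
        EEPROM.get(i * SLOT_SIZE, rec);
        if (rec.sequence != 0xFFFFFFFFUL && rec.sequence >= best) {
          best = rec.sequence;
          bestSlot = i;
        }
      }
      return bestSlot;
    }

    // Write the next odometer value into the slot after the newest one, so each
    // EEPROM cell is rewritten only once per SLOT_COUNT updates.
    void saveOdometer(uint32_t hundredths) {
      int newest = findNewestSlot();
      OdoRecord prev;
      EEPROM.get(newest * SLOT_SIZE, prev);
      OdoRecord next = { prev.sequence + 1, hundredths };  // wraps from 0xFFFFFFFF to 0 on a fresh EEPROM
      EEPROM.put(((newest + 1) % SLOT_COUNT) * SLOT_SIZE, next);
    }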

The e-bike’s main controller would require a key for operation, plus a simple on/off SPST switch located in front of the handlebars. The display would ideally be a Hitachi HD44780-style character LCD, though it could also be one of the Nokia-style LCDs, although those might be a little too small. There will be six buttons: on the left, below the display, four directional buttons laid out horizontally (in a style familiar to Vim users or Dance Dance Revolution/StepMania players), and on the right, a back button and an enter button. The display and controls need to be waterproofed.

Instead of using heavy deep-cycle lead-acid batteries, I’d opt for LiPo cells, which are ubiquitous in high-performance hobby electronics. Industry professionals are not fond of LiPo cells because they are comparatively more dangerous and volatile than other types of cells, and that increased risk cannot be tolerated in mass production. However, since I am not mass-producing e-bikes, it should be OK to accept the risks and enjoy the power of lightweight LiPos, as long as their charging is supervised closely.

This e-bike also needs a brake light, signal lights, and an LED headlight with a white color temperature rather than blue.

That’s all I want the bike to do. I want all of this while keeping the bike street-legal, and I want to be able to prove that it can be safely ridden on busy streets, given its various fail-safe mechanisms, including a speed regulator that requires manual override.

Sadly, I don’t know if I will ever be able to make this contraption.

Where’s the good backup software?

For *nix users, the answer is easy: rsync. For Macintosh users, the answer is even simpler: Time Machine (“time ‘sheen”). For Windows, the answer is a convoluted mess of choices. And the problem is that none of those choices give everything you want.

Why can’t you have everything? Here are all the things a backup program needs:

  • Permissions. If you can’t preserve your metadata, forget about making faithful backups. POSIX and Windows permissions are very different, but they still deserve the same love.
  • Resilience. The restore part of a program should never produce a fatal error, unless a backup has been corrupted beyond repair. If a part has been corrupted, ignore that part, notify the user that a corrupted portion was ignored (noting, of course, what the corrupted portion actually is), and continue with the restore process.
  • Compression. Many would argue that compression only makes the backup more difficult to restore, yields a minimal return in efficiency, etc. However, this can make a large difference when uploading from a personal home network to a storage service, where storage costs are billed by the gigabyte. I don’t know about you, but $1 a month was more than my tax return this year.
  • Encryption. Everyone’s got their tinfoil hats on, how about you?
  • Incremental backups. People are not going to do full backups every week. This is a waste of time, storage space, and energy, since most files would be redundantly stored.
  • Block-level. If you modified a 20 GB VHD file, are you going to copy the whole thing on every weekly incremental backup? No, you’re going to copy only the blocks/parts of that file that changed. (A sketch of block-level change detection follows this list.)
  • Archivable. It appears most people choose either image-based backups or file-based backups. I personally prefer the file level, but that should not mean “copy millions of files and spew them onto the target directory.” The backup should be neatly organized into, say, 50 MB parts that can be easily uploaded to a cloud service as part of a future backup plan, or it can be made as a single monolithic 800 GB file. The former works with most consumer file services, while the latter is most convenient for more enterprise-oriented services like Amazon Glacier.
  • Resumable. Most backup programs hate it when you shut down your computer for the night. Yet none of them seem to understand that this is exactly what shadow copies are for: even after the computer shuts down, a shadow copy does not magically change. Instead, the software restarts your entire backup and creates yet another useless shadow copy, for the mere sake of not touching files in use and making the most up-to-date backup possible.
  • Snapshots. Let’s say I don’t want to restore my whole computer; I just want to see an old file and how it changed over time. Most backup programs will not let you do that, citing that it is “too complex.” No, it’s not. Track the files the software backed up using a tiny database like SQLite. There you can store checksums, file sizes, previous versions, and so on. The suffering ends there. The end user can view a snapshot of the computer at a certain point in time, or view the history of a specific file, perhaps with diffs (binary diffs if the backup software is user-friendly enough). (A sketch of such a catalog also follows this list.)
  • Low profile. What is CloudBerry Backup using 2.7 GB of memory for? Just flapping around? No! Decent backup software should use 100 MB of memory, tops. Leave the heavy RAM consumption to browsers, games, and servers.
  • Integration. This backup software should be robust enough to make anything either a source or a destination for backups, within the limitations of each backup medium.
    • Least liquid: Offline local storage; Amazon Glacier; Google Coldline
    • Somewhat liquid: FTP (due to its slow transfer of many small files and its inability to perform multipart transfers); most consumer storage services
    • Most liquid: iSCSI SANs; high-availability storage services
  • Drive path-agnostic. Backup software should never, ever depend on drive letters to figure out backup sources and targets.
  • Predict drive failure. This goes somewhat beyond the scope of backup software, but there should be at least some kind of periodic SMART monitor to inform and warn the user of a drive that is showing signs of failure. Yes, put a big popup in the notification area with a scary message like “Your drive might fail soon” or just outright “Your drive is failing.” Show it for the first three days, make it go away, and then show it again the next week. Of course, the notification can be dismissed for a specific drive, but dismissing it should require the user to read a message about possibly losing data on the failing drive and wait 5 seconds before closing the dialog; after that, they never have to see the dialog for that drive again.
  • Recognize cache folders. Here’s what you need to do: just stick that CCleaner scanning stuff into your product. Make the default backup plan ignore whatever CCleaner would usually clean up. Caches can add up to gigabytes in size, and many users do not even care about including them in their backups, because all they want are their programs and documents. However, there is that one company that might say, “no, you can’t ignore cache folders, because we need a perfect file-level backup of the system tree.” (My argument would be to use CloneZilla and do it at the image level – but fine.)
  • Import from other services. No, I don’t care much about Acronis, Veeam, or other proprietary solutions. What I do care about, however, are the crappy Windows 7 Backup and Restore backups, dd “backups,” and other image-level backup formats. Don’t just import the backups: import the file history, recompress the data, preserve timestamps. Give them the full treatment, and put them neatly into the new backup format as if they really were old backups made by this software.
  • Responsive (and responsible) backend. Big enterprise backup software uses a UI frontend that merely communicates with the service backend. This is generally a good design. However, when the backend decides to quit, the UI frontend goes into limbo and does not respond to any commands instead of offering a reasonable explanation of what is happening with the backend, while the backend makes no attempt to halt whatever blocking operation is taking too long. The gears just grind to a halt, and nothing gets done on either side.
  • Don’t delete anything without asking. No, I don’t even want auto-purge functionality, and if you must have it, for the love of God, make it a manual operation. There is no reason to keep purging things constantly unless you have a disk quota to work under – in that case, the software should determine what is best to purge (start with the big stuff in the earliest backups) to meet the size requirement.
  • Only one backup mode. That backup mode had better be good, and it should use a hybrid format.
  • Open-source format. The software itself need not be open-source, but an open format essentially ensures that someone out there can write restore software that will always be compatible with the latest and greatest operating systems.
  • Bootable. Where are you going to make your restores from? A flash drive running Linux with an ncurses interface for your backup software, obviously. You could, of course, allow backups from that same bootable drive, in the case of an infected drive or as part of a standard computer emergency response procedure – but eh, that’s really pushing it. Just restores will do fine.
  • Self-testable. Make sure the backups can actually restore to something.
  • Exportable. One day, your backup software will not be relevant anymore, so why bother locking in users to your format? Make it so that they can export full archives of their backups, with a CSV sheet explaining all of the contents of each archive.
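
As promised above, here is a minimal sketch of block-level change detection, assuming fixed 1 MiB blocks and a simple FNV-1a hash standing in for a real cryptographic hash (a production tool would likely also use content-defined chunking). Comparing this week’s manifest against last week’s tells you exactly which blocks of that 20 GB VHD need to go into the increment.

    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    constexpr std::size_t kBlockSize = 1 << 20;  // fixed 1 MiB blocks

    // FNV-1a over one block; a stand-in for SHA-256 in this sketch.
    uint64_t hashBlock(const std::vector<char>& buf, std::size_t len) {
      uint64_t h = 1469598103934665603ULL;
      for (std::size_t i = 0; i < len; i++) {
        h ^= static_cast<unsigned char>(buf[i]);
        h *= 1099511628211ULL;
      }
      return h;
    }

    // One hash per block of the file: the file's "manifest."
    std::vector<uint64_t> manifestFor(const std::string& path) {
      std::ifstream in(path, std::ios::binary);
      std::vector<char> buf(kBlockSize);
      std::vector<uint64_t> manifest;
      while (in) {
        in.read(buf.data(), buf.size());
        std::streamsize got = in.gcount();
        if (got <= 0) break;
        manifest.push_back(hashBlock(buf, static_cast<std::size_t>(got)));
      }
      return manifest;
    }

    int main(int argc, char** argv) {
      if (argc < 3) { std::cerr << "usage: blockdiff OLD NEW\n"; return 1; }
      std::vector<uint64_t> oldMan = manifestFor(argv[1]);
      std::vector<uint64_t> newMan = manifestFor(argv[2]);
      // Only blocks whose hash changed (or that are brand new) get copied.
      for (std::size_t i = 0; i < newMan.size(); i++) {
        if (i >= oldMan.size() || oldMan[i] != newMan[i])
          std::cout << "block " << i << " changed\n";
      }
      return 0;
    }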
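
And a sketch of the snapshot catalog idea, using the SQLite C API (link with -lsqlite3). The table and column names here are made up purely for illustration; the point is that per-file history, sizes, and checksums fit comfortably in a tiny embedded database.

    #include <sqlite3.h>
    #include <iostream>

    int main() {
      sqlite3* db = nullptr;
      if (sqlite3_open("catalog.db", &db) != SQLITE_OK) return 1;

      // Two tables: one row per snapshot, one row per file version. A file's
      // history is just the chain of its versions across snapshots.
      const char* schema =
          "CREATE TABLE IF NOT EXISTS snapshots ("
          "  id INTEGER PRIMARY KEY, taken_at TEXT NOT NULL);"
          "CREATE TABLE IF NOT EXISTS file_versions ("
          "  id INTEGER PRIMARY KEY,"
          "  snapshot_id INTEGER REFERENCES snapshots(id),"
          "  path TEXT NOT NULL,"
          "  size INTEGER NOT NULL,"
          "  checksum TEXT NOT NULL,"      // hash of the stored content
          "  prev_version INTEGER);";      // previous version, for history and diffs

      char* err = nullptr;
      if (sqlite3_exec(db, schema, nullptr, nullptr, &err) != SQLITE_OK) {
        std::cerr << "schema error: " << err << "\n";
        sqlite3_free(err);
      }
      sqlite3_close(db);
      return 0;
    }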

At the end of the day, users just want their files safe and sound, so keep the software as close to the fundamentals as possible, and allow others to make tools around the backup software if additional functionality is needed.

Paranoia about the eclipse

Here it is in TL;DR format:

  • If you didn’t spend $500 on the latest ISO for this exact solar eclipse event, don’t use equipment thinking that it “blocks the dangerous solar rays.”
  • When the Moon passes over the Sun, the Sun becomes an ultra-hot beam of plasma ready to annihilate anything that it touches.
  • You are an idiot because you are a non-professional who tried to look at the Sun.
  • Don’t look at the Sun, or your eyes will instantly bulge out of your eye sockets and explode.
  • $100 for eclipse glasses? Well, it’s only for a few minutes, and they make looking at the sun safe, so I think they’re worth the price ;)))))
  • Stay indoors because the zombies are coming.

When I was a kid, I used to look at the Sun for a second or so at a time. Did it make me a better person? No, but my vision was unaffected: I still do not wear glasses to this day. I can’t say the same thing about these days, though. My eyes have gotten older, and when I do look at the Sun, spots form where the Sun was and linger for a few minutes until they fade.

If you want real advice, go here: http://web.williams.edu/Astronomy/IAU_eclipses/look_eclipse.html

Ideas for a new operating system

As I was watching Druaga compare the Windows system folder with the Mac system folder (which is probably just a fake frontend to a really ugly backend), I suddenly began to pace around, thinking about that graph-based file system again. I also thought about the “organization” defined by POSIX: are /usr/, /var/, /etc/, /opt/, /lib/, and so on really understandable? There’s clearly a conflict here: we want an organization that is readable by the user while also serving the core system, its components, and applications.

I speculate that a new operating system will be created in the next generation. Currently, I believe it is nearly impossible for a new kernel to be created, owing to the excess complexity that semiconductor companies have thrown into their electronics, which renders operating system support for PCs effectively exclusive to Linux and Windows, since those are the only two systems they really test.

Why do we need a new operating system? In truth, we really do not. The conservatives will say, “Then why go to so much effort making an operating system that nobody will use, when there already exists one that works?” I’d ask the same thing about GNU Hurd, ReactOS, and so on.

It’s for the future. You see, there is a fatal flaw in the current operating system architecture: its organization is a limited directed graph that amounts to a tree. It works under the premise that system data can be organized like papers and files inside folders. But the truth is that such data can be organized in various ways, and not necessarily in a hierarchical or tag-based structure.

An undirected graph-based file system would work more like the brain, using more fundamental pieces of data that could allow the cluster size to go down to perhaps 2 KB. It would be incredibly difficult to visualize, but you could still place sections of this data system in different physical locations, such as on a server.


A visit to the Googleplex

After doing a thing with Google over the summer with a team of college students, 150 or so of us were given an all-expenses-paid trip to the Google main headquarters in Mountain View, CA, for having completed the primary goals of the coding project.

It is certain that only a small number of individuals get this opportunity. If you were just a kid, you’d be jumping up and down, but we are mature individuals (and broke college students) and know better than to get our hopes too high.

Because we were not informed at all that we were forbidden from disclosing any part of the trip, I can make full disclosure – well, at least the most interesting parts.


Japan: the hyperfunctional society: part 1

This is intended to be a complete account of an eight-day trip to Japan, which had been planned for about two years by my native-speaking Japanese teacher, was organized by an educational travel agency, and included 26 other students of Japanese with varying levels of proficiency.

Names have been truncated or removed for the sake of privacy.

After many intermittent lapses in editing, I decided to just split the account into two parts, as it was getting increasingly difficult to get myself to finish the narrative, and at the same time I did not want to hold back the finished parts. I am not intending to publish this for money or anything like that; please excuse my limited vocabulary and prose during some of the dull parts.

Domain change

After an entirely unexpected drop of the extremely popular homenet.org domain (yes, visitors from Google, “homenet.org is down”!), it became impossible to reach the website via longbyte1.homenet.org due to an unreachable path to FreeDNS. Thus, I decided to just finish moving to n00bworld.com. It took a while to figure out how to get WordPress back up and pointing to n00bworld.com, but I eventually succeeded.

What I do not know, however, is if I will succeed in finishing the account of the Japan travel. I have been putting that off for too long now. Ugh.

Internet

Without the Internet, I would never have amassed the knowledge I hold today. The wildly successful knowledge powerhouses of Wikipedia and Google never cease to captivate users into learning something new every day.

Yet I loathe the Internet in numerous ways. It has become what is virtually (literally virtually) a drug habit, and in a way worse than a drug habit, because I depend on it for social needs and information. Without it, I would lose interesting, like-minded people to talk with, as well as a trove of information that I would otherwise have to buy expensive books for.

But without the development of the Internet, what would humanity be…? I suppose we would return to the days when people would actually be inclined to talk face-to-face, invite each other to their houses, play around, sit under a tree reading a book, debug programs, go places, make things. It wouldn’t necessarily be a better future, but it would certainly be a different one. If it took this long to develop the Internet (not very long, actually), imagine the other technologies we are missing out on today.

And then there is the problem of the masses. The problem lies not in the quantity itself; it’s that attempting to separate oneself from the group merely comes across as elitism. And you end up with some nice statistics and social experiments and a big, beautiful normal model, with very dumb people on one end and very intelligent people on the other.

This wide spectrum means that conflict abounds everywhere. People challenge perspectives on Reddit, challenge facts on Wikipedia, challenge opinions on forums, challenge ideas on technical drafts and mailing lists. And on YouTube, people just have good ol’ fistfights over the dumbest of things.

On the Internet, the demographic is completely different from that of human society, even though the Internet was supposed to be an extension of human society. The minority – yes, those you thought did not exist: the adamant atheists, the deniers, the libertarians, the conspiracists, the trolls – suddenly become vocal and sometimes violent. The professionalism with which the Internet was designed is not to be found on any of the major streams of information. This is not ARPANET anymore. These are not scientists anymore, studying how to run data over wires to see if they can send stuff between computers. These are people who believe the Internet is freedom at last. Freedom to love, freedom to hate; to hack, to disassemble, to make peace, to run campaigns, to make videos, to learn something, to play games, to form opinions, to argue, to agree, to write books, to store things, to pirate software, to watch movies, to empathize, to converse, to collaborate, or just to tell the world you really hate yourself.

Thus, I am a victim of freedom and a slave to it. My friends do not talk to me anymore. I am just left with solitude and a keyboard.

Some ideas

Concept of AI itself

I’ve glanced at many papers (knowing, of course, that I understand very little of their jargon) and concluded that the recent statistical and mathematical analysis of AI has simply been overthought. Yet the theory of AI from the ’70s and ’80s delves into entirely conflicting perspectives on the driving force of AI in relation to the morality and consciousness of the human brain.

Think about the other organs of the body. They are certainly not simple, but after 150 years, we’ve almost figured out how they work mechanically and chemically. The challenge is how they work mathematically, and I believe that an attempt to determine an accurate mathematical representation of the human body would essentially lead to retracing its entire evolutionary history, down to the tiny imperfections of every person across each generation. Just as none of our hands are shaped the same, our brains are most likely structured uniquely, save for their general physical layout.

I conjecture that the brain must be built on some fundamental concept that researchers have simply not discovered yet. It would be a beautiful conclusion, like the mass-energy equivalence that crossed Einstein’s mind while he was working in the patent office. It would be so fundamental that it would make AI ubiquitous and viable on all types of computers and architectures. And if that is not the case, then we will adapt our system architectures to the brain model to create compact, high-performing AI. The supercomputers would only have to be pulled out to simulate global-scale phenomena and creative work, such as software development, penetration testing, video production, and presidential-class political analysis and counsel.

Graph-based file system

Traditional file systems suffer from a tiny problem: their structure is inherently a top-down hierarchy, and data may only be organized using one set of categories. With the increasing complexity of operating systems, organizing operating system files, kernel drivers, kernel libraries, user-mode shared libraries, user-mode applications, application resources, application configurations, application user data, caches, and per-user documents is becoming more and more troublesome. The POSIX structure is, at present, “convenient enough” for current needs, but I resent having to follow a standard method of organization when it introduces redundancy and the misapplication of symbolic links.

In fact, the use of symbolic links exacerbates the fundamental problem with these file systems: they operate at too low a level, and in attempting to reorganize and deduplicate data, they simply increase the complexity of the file system tree.

Instead, every node should consist of metadata along with either data or a container linking to other nodes. Metadata may contain links to other metadata, or even to nodes consisting solely of metadata encapsulated as regular data. A data-only node is, of course, a file, while a container node is a directory. The difference, however, is that in a graph-based file system, each node is uniquely identified by a number rather than a string name (a string name in the metadata would still be used for human-readable listings, and a special identifier can serve as a link or locator of the node for other programs).
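
As a rough illustration of what such a node might look like in code (every name here is hypothetical, since nothing of this sort exists yet):

    #include <cstdint>
    #include <map>
    #include <string>
    #include <variant>
    #include <vector>

    using NodeId = uint64_t;  // a unique number, not a path, identifies each node

    struct Node {
      NodeId id;

      // Metadata: a human-readable name for listings, plus arbitrary keyed
      // attributes, some of which may themselves refer to metadata-only nodes.
      std::map<std::string, std::string> metadata;

      // A node is either a "file" (raw data) or a "directory" (links to other
      // node IDs); because this is a graph, a node may be linked from many
      // places at once without symbolic-link tricks.
      std::variant<std::vector<uint8_t>, std::vector<NodeId>> content;
    };

    // Resolution by ID replaces resolution by path: a program finds, say, its
    // compiler headers by following links from the compiler's node rather than
    // by hard-coding a location like /usr/include.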

The interesting part about this concept is that it completely defeats the necessity of file paths. A definite, specific structure is no longer required to run programs. Imagine compiling a program, but without the hell of locating compiler libraries and headers because they have already been connected to the node where the compiler was installed.

The file system size could be virtually limitless, as one could define specifics such as bit widths and byte order upon the creation of the file system.

Even the kernel would base itself around this system, from boot. Upon mount, the root node is retrieved, linking to core system files and the rest of the operating system. Package management to dodge conflicts between software wouldn’t be necessary, as everything is uniquely identified and can be flexibly organized to define exactly which applications require a specific version of a library.

In essence, it is a file system that abandons a tree structure and location by path, while encouraging references everywhere to a specific location of data.

Japanese visual novel using highly advanced AI (HAAI)

This would be an interesting first product for an aspiring AI company to show off its flagship “semi-sentient” HAAI product. Players would be able to speak and interact with characters, with generated responses and synthesized voices. A basic virtual machine containing a switchable English/Japanese language core, a common-sense core (simulating about ten years’ worth of real-life mistakes and experiences), and an empathy core (with a driver, to be able to output specific degrees of emotion) would be included in the game; developers would then parametrize it and add quirks for each character, so that every character ships with a unique AI VM image.

In fact, the technology showcased would be so successful that players would spend too much time enjoying the authentic human-like communication and getting to know the fictional characters too well, warranting a warning upon launching the game (like any health and safety notice) stating: “This game’s characters use highly advanced artificial intelligence. No matter how human-like these fictional characters act, they are not human beings. Please take frequent breaks and talk to real, human people periodically to prevent excessive attachment to the AI.”

EF review for Japan

They said they’d be posting my review “this fall,” which I guess implies that they screen and censor each review for any personal information. Also, I had to write the review in a tiny textbox in Internet Exploder because it failed to work in any other browser, and when I go to the “write review” menu, it’s as if I had never submitted a review in the first place. What a horrible web infrastructure their website has.

I’ll post my full account of my experience in Japan in a few days, but for now, please enjoy my scathing three-star review of the EF tour. The country is great, but the tour was certainly not.


One cannot review the culture and aspects of a country; it is not something that stars can be assigned to. You can choose any country that EF offers tours for and expect a great experience simply from being in a new environment with classmates. That part does not change with any educational tour or travel agency.

Thus, I will focus primarily on the tour itself, which is the part that EF specifically offers in competition with other travel agencies. I will cover praise and criticism point by point rather than in chronological order.

Praise

  • There was no outstanding need to contact EF. The tour and flights were all booked correctly.
  • Good density of places to visit. The tour’s itinerary was loaded with many points of interest, yet there was no feeling of exhaustion. I took around 900 photos by the conclusion of the tour.
  • Excellent cost-effectiveness. It’s difficult to beat EF in terms of pricing, especially in how they provide a fairly solid estimate with one big price tag.
  • Tour guide knew his history very well, even if he was unable to explain it fluently. You could ask him about the history of a specific point of interest, and he could tell you very precisely its roots, whether they be from the Meiji, Edo, or Tokugawa period.
  • Every dinner was authentic Japanese food. No exceptions.

Criticism

  • Tour guide had a poor command of English and was extremely difficult to understand. In Japan, “Engrish” is very common, and it’s admittedly very difficult to find someone who can speak English fluently and correctly. However, this really shows that you get what you pay for: if you want a cheapo tour, you will get a cheapo tour guide who might not be all you wanted. I will reiterate this: he was not a captivating tour guide, and it took great effort to absorb the information he was disseminating.
  • Little time spent at the actual points of interest, possibly due to inefficient use of the tour bus. In many cases, it’s cheaper and faster to use the subway, although I concede that the tour bus is useful when one wants to see the area leading up to an important or unfamiliar destination. Still, on the worst day, we were on the bus for a cumulative three hours, yet we only had around forty to fifty minutes per point of interest. No wonder I took so many pictures: the tour felt rushed and didn’t give me time to take in the view before we had to get back on the bus to go somewhere else.
  • Miscommunication with EF during the tour. We were promised two people to a room at the first hotel, but instead were assigned three to a room. The arrangement wasn’t that bad in the end, but it still contradicted the claims made in the travel meetings. What’s more, we were told something about an EF group from Las Vegas that would be merging with our group, but this also never happened (they toured separately from us, though we encountered them occasionally).
  • Reversed tour. There is, in fact, fine print saying that EF is allowed to do this if reversing the tour saves money, but it’s still unpleasant and detracts from the intended experience. My group leader, a native speaker whom I know very well, told me before the tour that she was irritated by the reversal, since it’s much better to start in Tokyo, the modern part of Japan, and work one’s way southward to the more traditional Kyoto.
  • The last day of the tour was poorly planned by EF, so our group leader had to change that day’s itinerary (well before the tour, obviously) to some significantly better plans. Originally, the whole day would have been spent basically hanging around Ueno Park, but she changed it to Tokyo Skytree, Hongwanji Temple, the Tsukiji fish market (which is moving elsewhere very soon), and the Edo-Tokyo Museum. We had to foot the bill for that day’s attractions, including Skytree, the museum, and 100 grams of toro (fatty tuna).
  • Poor distinction between what EF had already paid for and what we would have to pay for in addition to the tour. For instance, some of our subway tickets were bought ahead of time by our tour director, but some we had to pay for with our own money, which doesn’t really make sense because all of the transportation was supposed to be covered by the tour cost.
  • Our group leader (and her husband and kids) ended up doing most of the work, especially rounding everyone up and making sure everyone was present.
  • Less time than you would expect to spend your own money. After all, they want the tour to be educational rather than just general tourism. But the interesting part was that we had to vote to go back to Akihabara, because we were only given two hours (including lunch!) to buy the games and figurines we had always wanted to buy from Japan. Even after the small petition, the final decision was to make Akihabara and Harajuku mutually exclusive, meaning you could only go to one or the other. I decided to go to Harajuku purely because I would have felt guilty if I hadn’t stuck to the original plan, but I regret the decision in retrospect because I ended up buying absolutely nothing there. (They just sell Western clothes in Harajuku, so you’re a Westerner buying used Western clothes in a non-Western country.)

There are probably quite a few points I am missing here, but this should be sufficient to give you an idea of the specifics of the tour that are not covered in the generic “it was really great and I had a lot of fun!!” reviews.

As a recent high school graduate, I’ll be looking forward to my next trip to Japan, but this time with a travel agency that provides more transparency about the itinerary and fees. I’d also be willing to spend more money for a longer, better-quality tour that actually gives me time to enjoy viewing the temples and monuments, rather than frantically taking pictures to appreciate later.