A visit to the Googleplex

After working with Google over the summer alongside a team of college students, 150 or so of us were given an all-expenses-paid trip to Google's main headquarters in Mountain View, CA, for having completed the primary goals of the coding project.

Certainly, only a very few individuals get this opportunity. If you were just a kid, you'd be jumping up and down, but we are mature individuals (and broke college students) and know better than to get our hopes too high.

Because we were never told that any part of the trip was confidential, I can make full disclosure – well, at least of the most interesting parts.

We – a group of three – were driven by a rather unprofessional Uber driver, who talked to us about his skepticism of all of these big tech companies being clustered in the one Silicon Valley, in somewhat slurred speech (as if he had just finished smoking weed on the way to pick us up from the airport). I mentioned to him rather cautiously that I had read that people working here were having a hard time even paying their house mortgages – he affirmed it completely, noting that even a small studio easily starts at $100k, and that there is no way to sustain oneself without sharing a room. I told him rather plainly that we were going to Google, perhaps a little too cheerfully. Promptly, he scoffed a little, warning us "not to get sucked into the black hole that is Google… [because] Google is not just a culture; it is your life." I took his advice, as that of a local resident, to be of significant value.

On the afternoon of arrival, we are bused to the main headquarters of Google. (The driver is extremely aggressive about maintaining a lead over the other buses.) This visit is only possible, evidently, because it is a Sunday and the campus is devoid of people. We take a very limited tour of the campus, and most of the time is actually spent in a little park where all of the Android mascots have been placed. I conclude that there are probably not many interesting things around the campus, as it is a very large (~20,000-person) campus where a vast multitude of regular office buildings have been helplessly consumed by this beast. I get to see the main Google building, as in the one whose big logo is on Getty Images and gets used every time the press talks about some Google corporate action.

It's no mystery why filmmakers decided to establish their businesses in Hollywood: perhaps someone noticed how picturesque California was, with its hundred-year-old tall mature pine trees, and decided to film here. It occurred to me then that filmmaking started with nature, not action movies.

During this afternoon visit, we are not really taken inside very much – in fact, we only go indoors to one presentation lounge for a few minutes, and are then sent back outside. There is probably too much at risk if they allow us anywhere beyond presentation rooms – after all, these are facilities for use by employees. There is a pronounced security presence, but it is not as prevalent as I thought it would be (no backpacks checked, no active checking of individuals, just standing around the perimeter ensuring that nobody strays away or sneaks in). There are a great deal of games, and we get to meet some of the team that worked on the summer program activities. I suppose this meeting is more about socialization, which I am actually terrible at. I do my regular antisocial routine, where I walk around looking at people and waiting for someone to interact with me. If I'm asked to take a picture with someone involving some kind of Google-related prop, I just apologize, saying that I'm not a slave to social media.

We spend the whole next day in a presentation room and courtyard area, in some other part of the Googleplex, probably a mile or so from the main headquarters. Clearly, this is because it is a Monday, and there is nowhere else we can go as a large group, since all the other spaces are occupied by employees. We spend the day with more games and talks about diversity in the field, core Google products, 20% time, the 10x principle (which I still don't really understand very well), a Q&A session with a panel of former members of the summer program ("does it matter what programming language you use for your coding interview?" no, of course not, why would it? you should care more about the job than the interview), and a barbecue.

All in all, it was not very interesting. I thought we would pull out the laptops for something, but that never happened. Just one day full of talks, and a lot of food. It's not like I haven't endured "death by recruitment" before. (A company that starts with an "A" and ends with "T&T", except their presentations were even more boring.)

I came here thinking to myself that there would be recruiters watching my every action, but there weren’t. The project advisers, of course, were present to talk and eat with us and things like that, but there was no one to immediately approach us for a job interview or anything like that, because that was not the purpose of the trip. They just wanted to show us the campus, and they spared no expense to provide us with this opportunity. It only proves that they can shell out any sort of money, including $100,000 for a day’s worth of flights from all across the US, and still yield a return on their investment.

One of the last things I heard before I left was that Google was receiving around 300 new employees (Nooglers) daily, yet turnover was quite high; it was growing at an exponential rate, limited solely by real estate (it chews up entire businesses to subsist!). Google is already around 80,000 strong (actual figures from a real employee) and will continue its growth for the foreseeable future. Immediately, I thought of the Roman Empire. This growth is unsustainable; it will inevitably split up and break from the power struggle; people think it will last forever, but it will not. There will be more intelligent, less centralized ways to find information in the future, because Google currently owns a crucial part of the gateway to information: the search engine, your email, half the world's videos – and what next?

Perhaps I would pursue Google for an internship, but certainly not for a full-time job. Working for Google would hinder me from accomplishing my life's work due to non-compete and special clauses in their mischievous contracts. Moreover, if Larry Page and Sergey Brin were able to make a successful business while knowing practically nothing about what they were doing, what stops me from pursuing that same spirit and passion, with a garage as my office? Heck, it's hardly even an entrepreneurial spirit – I'm not campaigning daily for money that isn't mine – it's just plain old curiosity and the wish to take an idea further.

They all started off as kids who didn’t know what they were getting into, and no one understood them, and they didn’t understand much about business and public relations. The naivete simply brought them forward as a stroke of luck. Is that it, really? A stroke of luck?

No. I’m not lured by all of these “perks,” the materialistic values that are merely used as tools to entice people into joining this empire. Look, they have already been causing localized inflation; why would I want to bring that same inflation over to wherever I live?

Just another reminder of the lost, contrived, and discordant world we live in.

Japan: the hyperfunctional society: part 1

This is intended to be a complete account of an eight-day trip to Japan, which had been planned for about two years by my native-speaking Japanese teacher, was organized by an educational travel agency, and included 26 other students of Japanese with varying levels of proficiency.

Names have been truncated or removed for the sake of privacy.

After many intermittent lapses in editing, I decided to just split it into two parts, as it was getting increasingly difficult to get myself to finish the narrative, but at the same time I did not want to hold back the finished parts. I am not intending to publish this for money or anything like that; please excuse my limited vocabulary and prose during some dull parts. (more…)

Domain change

After an entirely unexpected drop of the extremely popular homenet.org domain (yes, visitors from Google, “homenet.org is down”!), it became impossible to reach the website via longbyte1.homenet.org due to an unreachable path to FreeDNS. Thus, I decided to just finish moving to n00bworld.com. It took a while to figure out how to get WordPress back up and pointing to n00bworld.com, but I eventually succeeded.

What I do not know, however, is if I will succeed in finishing the account of the Japan travel. I have been putting that off for too long now. Ugh.

Internet

Without the Internet, I would never have amassed the knowledge I hold today. The wild success of the knowledge powertrains of Wikipedia and Google never ceases to captivate users into learning something new every day.

Yet I loathe the Internet in numerous ways. It's become what is virtually (literally virtually) a drug habit, and in a way worse than a drug habit, because I depend on it for social needs and information. Without it, I would lose interesting, like-minded people to talk with, as well as a trove of information that I would otherwise have to buy expensive books for.

But without the development of the Internet, what would humanity be…? I suppose we would return to the days where people would actually be inclined to talk face-to-face, invite each other to their houses, play around, sit under a tree reading a book, debug programs, go places, make things. It wouldn’t necessarily be a better future, but it would certainly be a different one. If it took this long to develop the Internet (not very long, actually), imagine the other technologies we are missing out on today.

And then there is the problem of the masses. The problem lies not in the quantity itself; it's that attempting to separate oneself from the group merely comes across as elitism. And you end up with some nice statistics and social experiments and a big, beautiful normal curve, with very dumb people on one end and very intelligent people on the other.

This wide spectrum means that conflict abounds everywhere. People challenge perspectives on Reddit, challenge facts on Wikipedia, challenge opinions on forums, challenge ideas on technical drafts and mailing lists. And on YouTube, people just have good ol' fistfights over the dumbest of things.

On the Internet, the demographic is completely different from that of human society, even though the Internet was supposed to be an extension of human society. The minority – yes, those you thought did not exist: the adamant atheists, the deniers, the libertarians, the conspiracists, the trolls – suddenly become vocal and sometimes violent. The professionalism the Internet was designed with is not to be found in any of the major streams of information. This is not ARPANET anymore. These are not scientists anymore, studying how to run data over wires to see if they can send stuff between computers. These are people who believe the Internet is freedom at last. Freedom to love, freedom to hate; to hack, to disassemble, to make peace, to run campaigns, to make videos, to learn something, to play games, to form opinions, to argue, to agree, to write books, to store things, to pirate software, to watch movies, to empathize, to converse, to collaborate, or just to tell the world you really hate yourself.

Thus, I am a victim of freedom and a slave to it. My friends do not talk to me anymore. I am just left with solitude and a keyboard.

Some ideas

Concept of AI itself

I've glanced at many papers (knowing, of course, that I know very little of their jargon) and concluded that the recent statistical and mathematical analysis of AI has simply been overthought. Yet the theory of AI from the 70s and 80s delves into entirely conflicting perspectives on the driving force of AI in association with the morality and consciousness of the human brain.

Think about the other organs of the body. They are certainly not simple, but after 150 years, we've almost figured out how they work mechanically and chemically. The challenge is how they work mathematically, and I believe that an attempt to determine an accurate mathematical representation of the human body would essentially lead to retracing its entire evolutionary history, down to the tiny imperfections of every person across each generation. Just as none of our hands are shaped the same, our brains are most likely structured uniquely, save for their general physical structure.

I conjecture that the brain must be built on some fundamental concept, but current researchers have not discovered it yet. It would be a beautiful conclusion, like the mass-energy equivalence that crossed Einstein’s mind when he was working in the patent office. It would be so fundamental that it would make AI ubiquitous and viable for all types of computers and architectures. And if this is not the case, then we will adapt our system architectures to the brain model to create compact, high-performing AI. The supercomputers would only have to be pulled out to simulate global-scale phenomena and creative development, such as software development, penetration testing, video production, and presidential-class political analysis and counsel.

Graph-based file system

Traditional file systems suffer from a tiny problem: their structure is inherently a top-down hierarchy, and data may only be organized using one set of categories. With the increasing complexity of operating systems, organizing operating system files, kernel drivers, kernel libraries, user-mode shared libraries, user-mode applications, application resources, application configurations, application user data, caches, and per-user documents is becoming more and more troublesome. The present POSIX structure is "convenient enough" for current needs, but I resent the necessity of following a standard method of organization when it introduces redundancy and the misapplication of symbolic links.

In fact, the use of symbolic links exacerbates the fundamental problem with these file systems: they work at too low a level, and they attempt to reorganize and deduplicate data but simply increase the complexity of the file system tree.

Instead, every node should comprise metadata, as well as either data or a container linking to other nodes. Metadata may contain links to other metadata, or even to nodes made solely of metadata encapsulated as regular data. A data-only node is, of course, a file, while a container node is a directory. The difference, however, is that in a graph-based file system, each node is uniquely identified by a number rather than a string name (a string name in the metadata is still used for human-readable listings, and a special identifier can be used as a link or locator of the node for other programs).
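
A minimal sketch of this node model, in Python. All names here (`Node`, `FileSystem`, the example files) are hypothetical illustrations of the idea, not any real implementation:

```python
# Sketch of a graph-based file system node: metadata plus either
# data (a "file") or links to other nodes (a "directory"). Nodes
# are identified by number, never by path.

class Node:
    def __init__(self, node_id, metadata, data=None):
        self.node_id = node_id    # unique numeric identifier
        self.metadata = metadata  # e.g. {"name": ...} for human-readable listings
        self.data = data          # bytes -> this node acts as a file
        self.links = {}           # node_id -> Node -> acts as a container

class FileSystem:
    def __init__(self):
        self.nodes = {}    # global table: numeric id -> node
        self.next_id = 0

    def create(self, metadata, data=None):
        node = Node(self.next_id, metadata, data)
        self.nodes[node.node_id] = node
        self.next_id += 1
        return node

    def link(self, parent, child):
        # A child may be linked from many parents: there is no
        # single hierarchy, only a graph of references.
        parent.links[child.node_id] = child

fs = FileSystem()
compiler = fs.create({"name": "cc"}, data=b"\x7fELF...")
headers = fs.create({"name": "include"})
stdio = fs.create({"name": "stdio.h"}, data=b"int printf(...);")
fs.link(headers, stdio)
# The headers are linked directly to the compiler's node, so no
# search path like /usr/include is ever needed.
fs.link(compiler, headers)
```

Because any node can be linked from any number of containers, "reorganizing" data is just adding or removing links, with no symlink indirection.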

The interesting part about this concept is that it completely defeats the necessity of file paths. A definite, specific structure is no longer required to run programs. Imagine compiling a program, but without the hell of locating compiler libraries and headers because they have already been connected to the node where the compiler was installed.

The file system size could be virtually limitless, as one could define specifics such as bit widths and byte order upon the creation of the file system.

Even the kernel would base itself around the system, from boot. Upon mount, the root node is retrieved, linking to core system files and the rest of the operating system; package management to dodge conflicts between software wouldn’t be necessary, as everything is uniquely identified and can be flexibly organized to correctly define which applications require a specific version of a library.
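
The claim about dodging package conflicts can be sketched concretely. Because libraries are located by unique node ID rather than by a shared path, two versions of the same library can coexist, and each application links to exactly the version it needs (all names below are hypothetical):

```python
# Sketch: version pinning by node ID in a graph-based file system.
# There is no /usr/lib to fight over; each application's node links
# directly to the library node it was built against.

nodes = {}

def create(node_id, name, links=()):
    nodes[node_id] = {"name": name, "links": set(links)}
    return node_id

libfoo_1 = create(10, "libfoo 1.0")
libfoo_2 = create(11, "libfoo 2.0")   # coexists with 1.0; no conflict
app_old = create(20, "legacy-app", links={libfoo_1})
app_new = create(21, "shiny-app", links={libfoo_2})

def dependencies(node_id):
    # Resolution is a walk of the node graph from the application,
    # never a search through path-ordered directories.
    return {nodes[n]["name"] for n in nodes[node_id]["links"]}
```

Under this scheme, "installing" a second version never breaks the first, which is the conflict a conventional package manager exists to paper over.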

In essence, it is a file system that abandons a tree structure and location by path, while encouraging references everywhere to a specific location of data.

Japanese visual novel using highly advanced AI (HAAI)

This would be an interesting first product for an aspiring AI company to show off its flagship "semi-sentient" AAI product. Players would be able to speak and interact with characters, with generated responses including synthesized voices. A basic virtual machine containing an English and Japanese switchable language core, a common sense core (simulating about ten years' worth of real-life mistakes and experiences), and an empathy core (with a driver, to be able to output specific degrees of emotion) would be included in the game, which developers would then parametrize and add quirks to for each character, so that every character ends up with a unique AI VM image.

In fact, the technology showcased would be so successful that players would spend too much time enjoying the authentic, human-like communication and getting to know the fictional characters too well, warranting a warning for players upon launching the game (like any health and safety notice): "This game's characters use highly advanced artificial intelligence. No matter how human-like these fictional characters act, they are not human beings. Please take frequent breaks and talk to real, human people periodically, to prevent excessive attachment to the AI."

EF review for Japan

They said they'd be posting my review "this fall," which I guess implies that they screen and censor each review for any personal information. Also, I had to write the review in a tiny textbox in Internet Exploder because it failed to work in any other browser, and when I go to the "write review" menu, it's as if I had never submitted a review in the first place. What horrible web infrastructure their website has.

I’ll post my full account of my experience in Japan in a few days, but for now, please enjoy my scathing three-star review of the EF tour. The country is great, but the tour was certainly not.


One cannot review the culture and aspects of a country; it is not something stars can be placed on. You can choose any country EF offers tours for and expect a great experience simply by being present in a new environment with classmates. This part does not change with any educational tour or travel agency.

Thus, I will focus primarily on the tour itself, which is the part that EF specifically offers in competition with other travel agencies. I will cover praise and criticism point by point rather than in chronological order.

Praise

  • There were no outstanding needs to contact EF. The tour and flights were all booked correctly.
  • Good density of places to visit. The tour’s itinerary was loaded with many points of interest, yet there was no feeling of exhaustion. I took around 900 photos by the conclusion of the tour.
  • Excellent cost-effectiveness. It’s difficult to beat EF in terms of pricing, especially in how they provide a fairly solid estimate with one big price tag.
  • Tour guide knew his history very well, even if he was unable to explain it fluently. You could ask him about the history of a specific point of interest, and he could tell you very precisely its roots, whether they be from the Meiji, Edo, or Tokugawa period.
  • Every dinner was authentic Japanese food. No exceptions.

Criticism

  • Tour guide had poor command of English and was extremely difficult to understand. In Japan, “Engrish” is very common, and it’s admittedly very difficult to find someone who can speak English fluently and correctly. However, this really reveals that you get what you pay for: if you want a cheapo tour, you will get a cheapo tour guide who might not be all you wanted. I will reiterate this: he was not a captivating tour guide, and it took great effort to try to absorb the information he was disseminating.
  • Little time spent in the actual points of interest, possibly due to an inefficient use of the tour bus. In many cases, it’s cheaper and faster to use the subway to get to places, although I concede that the tour bus is useful in times where one wants to see the area that leads up to an important or unfamiliar destination. Still, on the worst day, we were on the bus for a cumulative three hours, yet we only had around forty to fifty minutes per point of interest. No wonder I took so many pictures, as the tour felt rushed and didn’t give me time to take in the view before we had to get back in the bus to go somewhere else.
  • Miscommunication with EF during the tour. We were promised two people to a room at the first hotel, but were instead assigned three to a room. The arrangement wasn't that bad after all, but it still contradicted the claims made in the travel meetings. What's more, we were told something about an EF group from Las Vegas that would be merging with our group, but this also never happened (they toured separately from us, though we encountered them occasionally).
  • Reversed tour. There is, in fact, fine print saying EF is allowed to do this if reversing the tour saves money, but it's still unpleasant and detracts from the intended experience. My group leader, who is a native speaker I know very well, told me before the tour that she was irritated by the reversal, since it's much better to start in Tokyo, the modern part of Japan, and work one's way southward to the more traditional Kyoto.
  • The last day of the tour was poorly planned by EF, so our group leader had to change the itinerary of that day (well before the tour, obviously) to some significantly better plans. Originally, the whole day would have been basically hanging around in Ueno Park, but she changed that to going to Tokyo Skytree, Hongwanji Temple, the Tsukiji fish market (which is moving elsewhere very soon), and the Edo-Tokyo Museum. We had to foot the bill for the attractions of this day, including Skytree, the museum, and 100 grams of toro (fatty tuna).
  • Poor distinction between what is already paid by EF and what we would have to pay for in addition to our tour. For instance, some of our subway tickets were already bought ahead of time by our tour director, but some we had to pay for with our money, which doesn’t really make sense because all of the transportation was supposed to have been covered by the tour cost.
  • Our group leader (and her husband and kids) ended up doing most of the work, especially rounding up everyone and ensuring that everyone was present.
  • Less time than you would expect to spend your own money. After all, they want the tour to be educational, rather than just general tourism. But the interesting part was that we had to vote to go back to Akihabara, because we were only given two hours (including lunch!) to buy the games and figurines we had always wanted to buy from Japan. Even after the small petition, the final decision was to make Akihabara and Harajuku mutually exclusive, which means that you could only choose to go to one or the other. I decided to just go to Harajuku purely because I’d feel guilty if I didn’t stick to the original plan, but I regret the decision in retrospect because I ended up buying absolutely nothing there. (They just sell Western clothes in Harajuku, so you’re a Westerner buying used Western clothes in a non-Western country.)

There are probably quite a few points I am missing here, but this should be sufficient to give you an idea of the specifics of the tour that are not covered in the generic "it was really great and I had a lot of fun!!" reviews.

As a recent high school graduate, I'll be looking forward to my next trip to Japan, but this time with another travel agency that provides more transparency in terms of itinerary and fees. I'd also be willing to spend more money to get a longer, better-quality tour that actually gives me time to enjoy viewing the temples and monuments, rather than frantically taking pictures to appreciate later.

On the regulation of AI

The attempt to regulate AI – something that doesn't even truly exist yet – seems so futile. We don't yet have AI we can call sentient. The rationale is well-founded, but what we're really trying to say is, "We know we can make something better than us in every way imaginable, so we'll limit its proliferation so that humans are superseded not by AI, but by our own demise."

So after the many times this has been done, ad nauseam, it looks like the "Future of Life Institute" (as if they were gods who possibly have any power over the ultimate fate of humanity!) has disseminated the Asilomar AI Principles. (Asilomar is just the place where the meeting was held. Apparently, these astute individuals really like the beach, as they had gone to Puerto Rico for their previous conference two years prior.) They have garnered thousands of signatures from prestigious, accomplished AI researchers.

The Asilomar Principles are an outline of 23 issues/concepts that should be adhered to in the creation and continuation of AI. I’m going to take it apart, bit by bit.


Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

What is "undirected intelligence"? Does this mean we can't throw AI at a big hunk of data and let it form its own conclusions? Meaning we can't feed an AI a million journals and let it put two and two together to write a literature review for us. And we can't use AI to troll for us on 4chan.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

They throw this word "beneficial" around, but I don't know what exactly "beneficial" means. Cars are beneficial, but they can also be used to kill people.

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?

You get programmers to stop writing lazy, dirty, unoptimized code that disregards basic security and design principles. We can’t even make an “unhackable” website; how could we possibly make an AI that is “unhackable” at the core?

  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?

You can’t. Robots replace human capital. The only job security that will be left is programming the robots themselves, and even AI will take care of patching their own operating systems eventually. Purpose – well, we’ve always had a problem with that. Maybe you can add some purpose in your life with prayer – or is that not “productive” enough for you?

  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?

Legal systems can’t even cope with today’s technology. Go look at the DMCA: it was made decades ago, back in the age of dial-up, and is in grave need of replacement to make the system fairer. You can post videos within seconds today that most likely contain some sort of copyrighted content on it.

  • What set of values should AI be aligned with, and what legal and ethical status should it have?

Most likely, they will be whatever morals the AI’s developers personally adhere to. Like father, like son.

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

Like lobbying? I don’t think I’ve ever seen “constructive and healthy exchange” made on the Congressional floor. Dirty money always finds its way into the system, like a cockroach infestation.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

Doesn’t this apply to pretty much everything research-related? Oh, that’s why it’s titled “research culture.” I’ll give them this one for reminding the reader about common sense.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

I almost interpreted this as “AI should avoid being racist.” Anyhow, this is literally capitalism: competing teams will cut corners and do whatever they can to lead in the market. This is probably the liberal thinking of the researchers leaking into the paper: they are suggesting that capitalism is broken and that we need to be like post-industrial European countries, with their semi-socialism. In a way, they’re right: capitalism is broken – economic analysis fails to factor in long-term environmental impacts of increases in aggregate supply and demand.

Ethics and Values

Why do they sidestep the word "morals"? Does this word not exist anymore, or is it somehow confined to something that is inherently missing from the researchers?

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

“Safety first.” Okay…

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

You want a black box for your AI? Do you want to give them a room where you can interrogate them for info? Look, we can't even extract alibis from humans, so how can we peer into AI brains and get anything intelligible out of them?

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

This is not a place AI should delve into anyway. We will not trust AI to make important decisions all by itself, not in a hundred years.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

Meaning you want to be able to sue individual engineers, rather than the company as a whole, for faults in an AI. Then what's the point of a company, if it doesn't protect its employees from liability?!

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

What if the AI finds itself aligning better with those values than humans do? What if the company that made an AI became corrupt and said to itself, "This AI is too truthful, so we'll shut it down for not aligning with our values"?

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

Debatable topics like abortion come to mind. Where’s the compatibility in that?

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

Again, we don’t even have control over this right now, so why would we have control over it in the future with AI?

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

And it probably will “curtail” our liberty. Google will do it for the money, just watch.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

What a cliché phrase… ohhh. It's as if I didn't include this exact phrase in my MIT application – too gullible to realize that literally everyone else had the exact same intentions when they applied to MIT too.

When Adobe sells Photoshop, is it empowering people to become graphic artists? Is it empowering everyone, really, with that $600 price tag? Likewise, AI is just software, and like any software, it has a price tag, and the software can and will be put for sale. Maybe in 80 years, I’ll find myself trying to justify to a sentient AI why I pirated it.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

Reminds me of the imperialist “Greater East Asia Co-Prosperity Sphere.” Did Japan really want to share the money with China? No, of course not. Likewise, it’s hard to trust large companies that appear to be doing what is morally just.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

I can’t even tell Excel to temporarily stop turning my strings into numbers; likewise, it won’t exactly be easy to command an AI to leave a specific task to be done manually by a human. And what if the data is in a raw binary format intended to be read by machines only? Not very easy for the human to collaborate, is it now?

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

I think at some point, a sentient AI will have different, more “optimal” ideas about those social and civic processes, and will want to rework them or shut them down entirely.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Tell that to our governments, not us. Oops, too late, the military has already made such weapons…

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

“Assumptions” including this entire paper. You assume you can control the upper limit of AI, but you really can’t.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

You don’t say.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

Because such efforts show that human labor is going to be deprecated in favor of stronger, faster robotic work…?

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Every person will have their own “superintelligence.” There will not be one worldly superintelligence until the very end of human civilization, which ought to be beyond the scope of this document, since we obviously can’t predict the future so far.

 

You can make pretty documents outlining the ideals of AI, but you must be realistic with your goals and what people will do with AI. Imposing further rules will bring AI to a grinding halt, as we quickly discover the boundaries that we have placed upon ourselves. Just let things happen, as humans learn best from mistakes.

On Aseprite

Once upon a time, my uncle wanted to give me Photoshop CS5 as a present for my tenth birthday. However, as he did not bring the physical box along with him when he visited (he was a graphic artist at the time), he ended up installing a cracked copy when I wasn’t on the computer. I kept whining that it was illegal, that he couldn’t do that and now there were going to be viruses on my computer, but he explained calmly that there was no other way, since he didn’t have the CD with him. So I said okay, vowing I’d uninstall it later, but after a while of using it, it kind of stuck, and no malware ever appeared (to this day, I am surprised he managed to find a clean copy so quickly). The only condition, as he stated, was that I could not use Photoshop for commercial use – basically, you can’t sell anything you make with this cracked Photoshop. Fair enough.

Even so, I steered away from Photoshop, as anything I made with it felt tainted with piracy. Later, I’d use it a little more, but I placed little investment in learning the software, as I had made no monetary investment in the software at all. I used Paint.NET instead, and despite its shortcomings (no vector mode, no text layers, half-decent magic wand, no magnetic lasso), the shortcuts felt familiar and the workflow remained generally the same as that of Photoshop. People also recommended Gimp as “the only good free alternative to Photoshop”, but I didn’t like Gimp because literally every shortcut is different, and the workflow is likewise totally different. The truth was that Photoshop was Photoshop, and Gimp was Gimp.

Yet I sought to do pixel art. This was supposed to be an easy endeavor, but Paint.NET was an annoying tool. Eventually, I found David Capello’s Aseprite and had no trouble adapting to the software, as it was designed for pixel art.

I had few complaints, but they had to be dismissed; after all, this was software still in the making. Only relatively recently was symmetry added, and the software was made more usable. I also liked its $0 price tag – if you were competent enough to compile the binaries yourself. And because the software was GPL, you could even distribute the binaries for free, even though Capello charged money for them. Capello was happy, and the FOSS community was happy. Some even tried setting up Aseprite as an Ubuntu package in universe, although it generally wasn’t up-to-date, due to stringent updating guidelines.

Until the day Capello dropped the GPLv2. I knew the day was coming and wasn’t surprised when the news came. Plop, the old GPLv2 came off, and subsequent versions shipped under a license of his own making, forbidding redistribution of binaries and further reproduction. The incentive to make pull requests adding features was gone – after all, you were really just helping someone out there earn more money, as opposed to contributing to a genuine open-source project. Of the 114 closed pull requests, only 7 are from this year (as of the time of writing).

In fact, the entire prospect of Aseprite continuing as an open-source project collapsed, for Capello had bait-and-switched the FOSS community into supporting his image editor because it was “open source,” without clearly disclosing his intention to drop the license in the future. Licensing under GPLv2 as opposed to GPLv3 was, after all, no mistake – perhaps it had something to do with compatibility with Allegro’s license, or more permissiveness for other contributors? No. It had to do with a clause that GPLv3 has but GPLv2 does not: the irrevocable, viral release of one’s code to the open-source realm. Without this clause, and because he was the owner of the code, Capello could simply rip off the old license and slap on a more proprietary one, which is exactly what he did.

The argument in defense of Capello was, “Well, it’s his software; he can do whatever he wants.” After all, he was already charging for the program anyway. But the counterargument is that the GPL is intended by the Free Software Foundation to promote the open-source movement, not to deceive users into thinking your for-profit project upholds the ideals of free and open-source software – especially the open part: free as in freedom, not just free as in beer. Now there is not only a price tag on the product but also a ban on distributing binaries, thanks to this incredible decision to make more money.

Yes, I know someone has to keep the lights on. There are many ways to do that, but turning your “open-source” project into downright proprietary software is not one of them. Now people demand more and contribute less – why should they pay when there are fewer results and fewer features being implemented? The cycle of development decelerates, and putting money into Aseprite becomes a matter of business rather than a matter of gratitude.

I don’t remember how to compile Aseprite at this point. I remember it being mostly a pain in the butt having to compile Skia, but that’s about it. Thus, I have no more interest in using Aseprite.

Now that I’m entering college, Adobe offers absolutely no discounts on its products. It’s almost as if they want kids like me to go ahead and pirate Photoshop again. There is no way I can afford a single program priced like an entire computer. Yes, I know, Aseprite is obviously cheaper than Photoshop, but why should I buy a pixel editing tool when I can get something that can do all kinds of image manipulation?

A slap to the face goes in the general direction of Adobe and David Capello. Good job keeping the image editing market at the status quo.

On Arduino

This is not intended to be a full explanation of Arduino, but rather to address some misconceptions about what Arduino is and what it’s supposed to be. I am by no means an expert, and I use an Elegoo Uno (an Arduino knockoff), because I am a cheap sore loser.

Arduino is intended to be an accessible, ready-to-use microcontroller kit for prototyping. For cost reasons, the designers decided to use an Atmel AVR (an ATmega8, 168, or 328(P), depending on the board revision).

Now that we know this, let’s get into the misconceptions.

“Arduino is Arduino”

Meaning that Arduino is its own thing and you can’t use anything to replace it. No. Arduino is simply a PCB containing:

  • the microcontroller you want to use
  • an accessible way to get to the pins supported by the microcontroller
  • an external clock crystal you can swap out
  • a couple of fuses so you don’t burn your toy out from playing with the leads
  • a USB controller for easy programming (which actually might turn out to be more powerful than your target microcontroller)
  • USB/12V ports
  • firmware that facilitates easy programming of the target microcontroller

You could rig up your own programmer for your target microcontroller and solder everything yourself, but then you’d be missing the point: it’s all for convenience. Any manufacturer can make an “Arduino”-like kit, and it would work just as well.

“Arduino IDE is the only way to program the Arduino”

Wrong again. This is actually the most rampant misconception out there. Frankly, the Arduino IDE is a horrible “IDE,” if you can even call it that. It is quite literally a Java application with the Processing user interface (Arduino grew out of Wiring, which in turn was based on Processing). When you compile something, it just runs a preprocessing step that takes your code and slaps on some standard headers, then invokes the prepackaged avr-gcc that does the actual heavy lifting. When you upload something, it invokes avrdude with the serial port you chose in the Tools menu, and wow, magic!

If you want, you can make your own Makefile or CMake configuration that invokes all of this. I actually recommend this choice, because then you are free to use any text editor of your choice.
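As a rough sketch of that approach – hedged and untested, with placeholder values of my own (the source file `blink.c`, the serial port, and the baud rate would all need adjusting for your board) – such a Makefile might look like:

```make
# Hypothetical minimal AVR Makefile; adjust MCU, PORT, and BAUD for your board.
MCU    = atmega328p
PORT   = /dev/ttyUSB0
BAUD   = 115200
CFLAGS = -Os -mmcu=$(MCU) -DF_CPU=16000000UL

blink.hex: blink.elf
	avr-objcopy -O ihex $< $@      # strip the ELF down to raw Intel HEX

blink.elf: blink.c
	avr-gcc $(CFLAGS) -o $@ $<     # cross-compile for the target AVR

upload: blink.hex
	avrdude -p $(MCU) -c arduino -P $(PORT) -b $(BAUD) -U flash:w:$<

.PHONY: upload
```

Then `make upload` does essentially what the IDE’s Upload button does, minus the Java.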

“Arduino uses its own programming language”

“Wow, it has classes, it must be Java!” “Hmm, it could be Processing.” Nope, it’s C++. About the only thing missing is exceptions, and that’s just because the AVR wasn’t designed with any exception handling capabilities at all. So every time you read an “Arduino Programming Language” tutorial, you’re actually being deceived into writing ugly C++ code. Take a deep breath, and realize you’ve been passing your big objects by value instead of by address all along. Use pointers.

“ATmega328 is like any other processor, but smaller”

Except it’s not. It’s an 8-bit RISC processor with a tiny instruction set, clocked at around 16 MHz – only marginally better than a Zilog Z80. Even with a very powerful language at your disposal, you still have to optimize your code.

Anyway, I’m tired and I’m out of ideas for what to write next.

Tape drive VCR, part 1

One day I had this amazing idea! I was looking through the tape drives for sale, and as usual they were over $1,200 for LTO-5 or LTO-6 tape drives, which are the only generations that can match the current hard drive market. There are so many unused VHS tapes, and with the untapped potential of analog storage media, you could store digital media in these cassettes! After all, they’re just tapes! You could make… a tape drive using a VCR!

All right, I think you’ve picked up on the sarcasm and naivety of my thought process. If you think about it for only a few seconds, it’s just silly humor. But when it lingers in your mind for days on end, wondering whether or not it truly is possible, you feel as if the only way to find out is to try it yourself.

Let’s take a closer look at this ludicrous idea. The first and only popular stab at it was ArVid, a Russian ISA card that ran composite video out to your VCR, and that was it. It could store data at speeds up to 325 kbps, and with some simple math we arrive at roughly 2 GB on an E-180. And you know what, a lot of people said “yeah, I guess that’s reasonable,” and they stopped there.

But ArVid has some huge limitations that, if addressed, could have substantially increased its capacity. First, it has only two symbols: luma on and off (!!!), which already makes the storage incredibly inefficient! It uses a Hamming code for ECC, but that’s about it, according to Wikipedia. Now, I’m no expert on signal processing (I just started seriously reading about this an hour or two ago), but with QPSK or QAM, we can make it significantly more efficient. So, screw ArVid.

We also don’t need an additional card to bring the analog data over to the VCR. We can use the sound “card” already built into the motherboard to produce the analog signals we need, and at an acceptable sample rate too. (“Sample rate” doesn’t exist when we’re talking about pure analog signals, but we still need to convert digital signals to analog, and the sound card only supports up to 96 kHz or 192 kHz, thereby limiting our symbol rate.) A separate sound card might still be convenient, however, given that this method may hinder the user’s ability to use sound at all (or the user may accidentally trigger a system sound that interferes with the data throughput).

So, how much data exactly do we think a VHS tape can carry? I think that in a perfect world with an ideal design, it will be somewhere between 80 and 160 GB. However, formal calculations based on the chosen modulation will be required to prove this, so I will not say much more about it.

Instead, I’ll discuss the practicality of this design. Yes, you could hack apart a remote control and wire it to the VCR, and that would be the interface for communication. Haha! But to be honest, I’m not really willing to destroy my VCR and remote just to figure out how well this is going to work. The solution, then, becomes fairly clear: just instruct the user on what to do. The user would note where a datum is stored, wind the tape to just before it, and hit “read” right before the data is reached. The signal would be aligned and processed perfectly.

Alternatively, we can tell the user to “initialize” the VHS by having the software sprinkle position markers across the tape. They don’t have to be exact placements, but they give the software an idea of what space has been consumed and where to go based on the last read position marker, assuming that the software tracks where data has been stored in some sort of external master file table. This can then be turned into simple “rewind for about 20 seconds” instructions given to the user. The user would play back a little bit, which would let the software give feedback on how close they are to the data (and if actual data is being played back, this should be detected and the user instructed to go back to the beginning of the data).

I’ve been taking a look at GNU Radio and I think this should give me a fair estimation of what modulation method(s) to use, and how much noise is expected. We’re dealing with VHS, which is great, because the expected noise is extremely low.