Category: On Anything

On the regulation of AI

The attempt to regulate AI seems futile: we are trying to regulate something that doesn’t even truly exist yet. We don’t have AI we can call sentient. The rationale is well-founded, but what we’re really saying is, “We know we can make something better than us in every way imaginable, so we’ll limit its proliferation so that humans are superseded not by AI, but by our own demise.”

So, after this has been done ad nauseam, the “Future of Life Institute” (as if they were gods with any power to control the ultimate fate of humanity!) has disseminated the Asilomar AI Principles (Asilomar is just the place where the meeting was held. Apparently, these astute individuals really like the beach, as they had gone to Puerto Rico for their previous conference two years prior). They have garnered thousands of signatures from prestigious, accomplished AI researchers.

The Asilomar Principles are an outline of 23 issues/concepts that should be adhered to in the creation and continuation of AI. I’m going to take it apart, bit by bit.

 

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

What is “undirected intelligence”? Does this mean we can’t throw AI at a big hunk of data and let it form its own conclusions? Meaning, we can’t feed AI a million journals and let it put two and two together to write a literature review for us. And we can’t use AI to troll for us on 4chan.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

They throw this word “beneficial” around, but I don’t know what exactly it means. Cars are beneficial, but they can also be used to kill people.

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?

You get programmers to stop writing lazy, dirty, unoptimized code that disregards basic security and design principles. We can’t even make an “unhackable” website; how could we possibly make an AI that is “unhackable” at the core?

  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?

You can’t. Robots replace human labor. The only job security left will be programming the robots themselves, and eventually AI will take care of patching its own operating systems too. Purpose – well, we’ve always had a problem with that. Maybe you can add some purpose to your life with prayer – or is that not “productive” enough for you?

  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?

Legal systems can’t even cope with today’s technology. Go look at the DMCA: it was made decades ago, back in the age of dial-up, and is in grave need of replacement to make the system fairer. Today, you can post a video within seconds, and it most likely contains some sort of copyrighted content.

  • What set of values should AI be aligned with, and what legal and ethical status should it have?

Most likely, they will be whatever morals the AI’s developers personally adhere to. Like father, like son.

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

Like lobbying? I don’t think I’ve ever seen “constructive and healthy exchange” made on the Congressional floor. Dirty money always finds its way into the system, like a cockroach infestation.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

Doesn’t this apply to pretty much everything research-related? Oh, that’s why it’s titled “research culture.” I’ll give them this one for reminding the reader about common sense.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

I almost interpreted this as “AI should avoid being racist.” Anyhow, this is literally capitalism: competing teams will cut corners and do whatever they can to lead in the market. This is probably the liberal thinking of the researchers leaking into the paper: they are suggesting that capitalism is broken and that we need to be like post-industrial European countries, with their semi-socialism. In a way, they’re right: capitalism is broken – economic analysis fails to factor in long-term environmental impacts of increases in aggregate supply and demand.

Ethics and Values

Why do they sidestep the word “morals”? Does this word not exist anymore, or is it somehow confined to something inherently missing from the researchers?

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

“Safety first.” Okay…

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

You want a black box for your AI? Do you want to give them a room where you can interrogate them for info? Look, we can’t even extract alibis from humans, so how can we peer into AI brains and get anything intelligible out of them?

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

This is not a place AI should delve into, anyway. We will not trust AI to make important decisions all by itself, not in a hundred years.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

Meaning you want to be able to sue individual engineers, rather than the company as a whole, for causing faults in an AI. Then what’s the point of a company if it doesn’t protect its employees from liability?!

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

What if an AI finds that it aligns with those values better than humans do? What if the company that made an AI becomes corrupt and says to itself, “This AI is too truthful, so we’ll shut it down for not aligning with our values”?

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

Debatable topics like abortion come to mind. Where’s the compatibility in that?

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

Again, we don’t even have control over this right now, so why would we have control over it in the future with AI?

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

And it probably will “curtail” our liberty. Google will do it for the money, just watch.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

What a cliché phrase… ohhh. It’s as if I didn’t include this exact phrase in my MIT application, too gullible to realize that literally everyone else applying had the exact same intentions.

When Adobe sells Photoshop, is it empowering people to become graphic artists? Is it empowering everyone, really, with that $600 price tag? Likewise, AI is just software, and like any software, it has a price tag, and it can and will be put up for sale. Maybe in 80 years, I’ll find myself trying to justify to a sentient AI why I pirated it.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

Reminds me of the imperialist “Greater East Asia Co-Prosperity Sphere.” Did Japan really want to share the money with China? No, of course not. Likewise, it’s hard to trust large companies that appear to be doing what is morally just.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

I can’t even tell Excel to temporarily stop turning my strings into numbers; it’s not exactly easy to command an AI to leave a specific task to be done manually by a human. What if the data is in a raw binary format intended to be read by machines only? Not very easy for the human to collaborate, is it now?

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

I think at some point, the sentient AI will have different, more “optimal” ideas it wants to implement, or shut down entirely.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Tell that to our governments, not us. Oops, too late, the military has already made such weapons…

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

“Assumptions” including this entire paper. You assume you can control the upper limit of AI, but you really can’t.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

You don’t say.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

Because such efforts show that human labor is going to be deprecated in favor of stronger, faster robotic work…?

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Every person will have their own “superintelligence.” There will not be one worldly superintelligence until the very end of human civilization, which ought to be beyond the scope of this document, since we obviously can’t predict the future so far.

 

You can make pretty documents outlining the ideals of AI, but you must be realistic with your goals and what people will do with AI. Imposing further rules will bring AI to a grinding halt, as we quickly discover the boundaries that we have placed upon ourselves. Just let things happen, as humans learn best from mistakes.

On Aseprite

Once upon a time, my uncle wanted to give me Photoshop CS5 as a present for my tenth birthday. However, as he did not bring the physical box along when he visited (he was a graphic artist at the time), he ended up installing a cracked copy when I wasn’t on the computer. I kept whining that it was illegal, that he couldn’t do that and now there were going to be viruses on my computer, but he explained calmly that there was no other way, since he didn’t have the CD with him. So I said okay, vowing I’d uninstall it later, but after a while of using it, it kind of stuck, and no malware appeared (to this day, it surprises me how he managed to find a clean copy so quickly). The only condition, as he stated, was that I could not use Photoshop for commercial purposes – basically, you can’t sell anything you make with this cracked Photoshop. Fair enough.

Even so, I steered away from Photoshop, as anything I made with it felt tainted with piracy. Later, I’d use it a little more, but I placed little investment in learning the software, as I had made no monetary investment in it at all. I used Paint.NET instead, and despite its shortcomings (no vector mode, no text layers, a half-decent magic wand, no magnetic lasso), the shortcuts felt familiar and the workflow remained generally the same as Photoshop’s. People also recommended Gimp as “the only good free alternative to Photoshop,” but I didn’t like Gimp because literally every shortcut is different, and the workflow is likewise totally different. The truth was that Photoshop was Photoshop, and Gimp was Gimp.

Yet I sought to do pixel art. This was supposed to be an easy endeavor, but Paint.NET was an annoying tool. Eventually, I found David Capello’s Aseprite and had no trouble adapting to the software, as it was designed for pixel art.

I had few complaints, but they had to be dismissed; after all, this was software still in the making. Only relatively recently was symmetry added, and the software was made more usable. I also liked its $0 price tag – if you were competent enough to compile the binaries yourself. And because the software was GPL, you could even distribute the binaries for free, even though Capello charged money for them. Capello was happy, and the FOSS community was happy. Some even tried setting up Aseprite as an Ubuntu package in universe, although it generally wasn’t up-to-date, due to stringent updating guidelines.

Until the day Capello decided to revoke the GPLv2. I knew the day was coming and wasn’t surprised when the news came. Plop, the old GPLv2 came off, and subsequent versions were released under a license of his own making, forbidding distribution of binaries and further reproduction. The incentive to make pull requests adding features was gone – after all, you were really just helping someone out there earn more money, as opposed to contributing to a genuine open-source project. Of the 114 closed pull requests, only 7 are from this year (as of the time of writing).

In fact, the entire prospect of Aseprite continuing as an open-source project collapsed, for Capello had bait-and-switched the FOSS community into supporting his image editor because it was “open source,” without clearly disclosing his intent to drop the license later. Choosing GPLv2 over GPLv3 was, after all, no mistake – perhaps it had something to do with compatibility with Allegro’s license, or more permissiveness for other contributors? No. It had to do with a clause that GPLv3 has but GPLv2 does not: the irrevocable, viral release of one’s code to the open-source realm. Without that clause, and because he owned the code, Capello could simply rip off the old license and slap on a more proprietary one, which is exactly what he did.

The argument in defense of Capello was, “Well, it’s his software; he can do whatever he wants.” After all, he was already charging for the program. But the counterargument is that the GPL is intended by the Free Software Foundation to promote the open-source movement, not to deceive users into thinking a for-profit project upholds the ideals of free and open-source software – especially the open part: free as in freedom, not just free as in beer. Now there is not only a price tag on the product but also a ban on distributing binaries, thanks to this incredible decision to make more money.

Yes, I know someone has to keep the lights on. You can do that in many ways, but one of them is not by turning your “open-source” project into downright proprietary software. Now people demand more and contribute less – why should they pay when there are fewer results and fewer features being implemented? The cycle of development decelerates, and putting money into Aseprite is now a matter of business rather than a matter of gratitude.

I don’t remember how to compile Aseprite at this point. I remember it being mostly a pain in the butt having to compile Skia, but that’s about it. Thus, I have no more interest in using Aseprite.

Now that I’m entering college, Adobe offers absolutely no discounts on its products. It’s almost as if they want kids like me to go ahead and pirate Photoshop again. There is no way I can afford a single program priced like an entire computer. Yes, I know, Aseprite is obviously cheaper than Photoshop, but why should I buy a pixel editing tool when I can get something that can do all kinds of image manipulation?

A slap to the face goes to the general direction of Adobe and David Capello. Good job for keeping the image editing market in the status quo.

On Arduino

This is not intended to be a full explanation of Arduino, but rather to address some misconceptions about what Arduino is and what it’s supposed to be. I am by no means an expert, and I use an Elegoo Uno (an Arduino knockoff), because I am a cheap sore loser.

Arduino is intended to be an accessible, ready-to-use microcontroller kit for prototyping. For cost reasons, the designers settled on Atmel AVR microcontrollers (the ATmega8, and later the ATmega168 and ATmega328P).

Now that we know this, let’s get into the misconceptions.

“Arduino is Arduino”

Meaning that Arduino is its own thing and you can’t use anything to replace it. No. Arduino is simply a PCB containing:

  • the microcontroller you want to use
  • an accessible way to get to the pins supported by the microcontroller
  • an external clock crystal you can swap out
  • a couple of fuses so you don’t burn your toy out from playing with the leads
  • a USB controller for easy programming (which actually might turn out to be more powerful than your target microcontroller)
  • USB/12V ports
  • firmware that facilitates easy programming of the target microcontroller

You could rig up your own programmer for your target microcontroller and solder everything yourself, but then you’re missing the point: it’s for convenience. Any manufacturer can make an “Arduino”-like kit, and it would work just as well.

Arduino IDE is the only way to program the Arduino

Wrong again. This is actually the most rampant misconception out there. Arduino IDE is a horrible “IDE,” if you can even call it that. It is quite literally a Java application with the Processing user interface (because Arduino was derived from Wiring, which in turn was based on Processing). When you compile something, it just runs a preprocessing script that takes your code and slaps on some standard headers, then invokes the prepackaged gcc that actually does the heavy lifting. When you upload something, it invokes avrdude with the COM port you chose in the menu, and wow, magic!

If you want, you can write your own Makefile or CMake configuration that invokes all of this. I actually recommend this route, because then you are free to use any text editor you like.
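For the curious, a minimal Makefile along these lines might look like the following. Everything here is an assumption to adjust for your setup – the MCU, the clock value, the serial port, and the file names are illustrative, and the right avrdude programmer flag depends on your board:

```makefile
# Sketch of what the IDE does under the hood (a template, not a tested
# build system; adjust MCU, PORT, and file names for your own board).
MCU   = atmega328p
F_CPU = 16000000UL
PORT  = /dev/ttyACM0   # serial port; something like COM3 on Windows

blink.hex: blink.elf
	avr-objcopy -O ihex -R .eeprom $< $@

blink.elf: blink.cpp
	avr-g++ -mmcu=$(MCU) -DF_CPU=$(F_CPU) -Os -o $@ $<

upload: blink.hex
	avrdude -p $(MCU) -c arduino -P $(PORT) -b 115200 -U flash:w:$<
```

That is really the whole pipeline: compile, convert to an Intel HEX image, and hand it to avrdude.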

Arduino uses its own programming language

“Wow it has classes, it must be Java!” “Hmm, it could be Processing.” Nope, it’s C++. The only thing it doesn’t have is exceptions, and that’s just because the AVR wasn’t designed with any exception-handling capabilities at all. So every time you read an “Arduino Programming Language” tutorial, you’re actually being deceived into writing ugly C++ code. Take a small breath, and realize you’ve been passing your big objects by value instead of by reference all along. Use pointers.
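Since it really is C++, the by-value trap can be shown with ordinary desktop C++; the struct and function names below are made up for illustration:

```cpp
#include <cassert>
#include <cstdint>

// A "big" object by AVR standards: 64 bytes of sample data.
struct SampleBuffer {
    uint8_t data[64];
};

// Pass by value: the whole 64-byte struct is copied onto the stack on
// every call -- wasteful on a chip with only 2 KB of SRAM.
uint16_t sumByValue(SampleBuffer buf) {
    uint16_t total = 0;
    for (uint8_t b : buf.data) total += b;
    return total;
}

// Pass by const reference: only a pointer-sized handle crosses the call,
// and the caller's buffer is never copied.
uint16_t sumByRef(const SampleBuffer& buf) {
    uint16_t total = 0;
    for (uint8_t b : buf.data) total += b;
    return total;
}
```

Both give the same answer; only one of them silently copies 64 bytes per call.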

ATmega328 is like any other processor, but smaller

Except it’s not. It’s an 8-bit RISC processor with a tiny instruction set, clocked at around 16 MHz – the same order of magnitude as a Zilog Z80. Even with a very powerful language at your disposal, you still have to optimize your code.
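As a sketch of what that optimization looks like in practice – a hypothetical example, not taken from any Arduino library – here is a 10-bit ADC reading converted to millivolts using pure integer math, since the AVR has no floating-point hardware and must emulate floats slowly in software:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper: convert a 10-bit ADC reading (0..1023, 5 V
// reference) to millivolts without touching floating point.
uint16_t adcToMillivolts(uint16_t raw) {
    // (raw * 5000) / 1023, with a 32-bit intermediate so the
    // multiplication cannot overflow 16 bits.
    return static_cast<uint16_t>((static_cast<uint32_t>(raw) * 5000UL) / 1023UL);
}
```

The design choice is the widened intermediate: on an 8-bit chip, picking the smallest integer width that still holds every intermediate value is half the optimization work.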

Anyway, I’m tired and I’m out of ideas for what to write next.

On rescue

A few weeks ago, I watched live as a kid climbed Trump Tower with a few suction cups and, shortly after, got nabbed by the police who cornered him. One of the policemen was just dangling the cord to pull him up in case he ever wanted to be “rescued.” Obviously, the police made it look like they were “rescuing” the kid, not nabbing and strangling him until he was unconscious.

But I had a daydream: suppose my school had a structural failure and collapsed (God forbid), and I found a way out. The police, firemen, and paramedics are all waiting outside the hole I would escape from. Right when I find the hole and they come within my line of sight, they immediately take me and put an oxygen mask on me, maybe throw a shock blanket on me. Gasping for air, I try to tell them, “I know where the rest are,” stating my intention to sacrifice myself in a heroic effort to rescue others trapped inside.

Back then, your request would be accepted. The firemen would helplessly watch as you look outside for a second and scamper back into the rubble, perhaps either returning with a few bodies or becoming one of those bodies yourself. After you rescue the bodies you can and throw yourself onto the ground, everyone would surround you and praise your heroic efforts as you are placed in the ambulance and taken to the hospital, in case you were stabbed by a piece of rubble or your lungs are filled with fine particulate matter. After a few days, you would be globally recognized as a hero and/or a saint, depending on whether or not you died in your mission.

But times are different. The same firemen will not honor your heroism. They will say, “No, the structure is unstable. We will do the best we can.” or, “We cannot afford to lose another person.” or, “If you die in there, your parents might sue us.” Shaking and fighting, you are put in the ambulance anyway and sent off as yet another victim.

A few days later, you would hear news of the tragedy, and, of course, the girl, the hero, who rescued five bodies. She gets all the media attention; all the reputation; the visit to the White House. You tell the media you wanted to rescue people too, but the firemen did not allow you under any circumstances. The media ignores you in favor of reporting the trendy headlines celebrating this newfound hero.

Whose story is better: hers or yours? Who should be honored more: the hero who wanted to be, but was forcefully restrained; or the hero who did not intend to (or perhaps she did), and became one?

And the psychologist will come and look at your case file. You will cry, “I wanted to save them! I wanted to save them but I couldn’t!” She will apathetically write down, “Survivor guilt, possible PTSD.” And she will say, “There is nothing you can do.” You will ask for retribution. You will want to sue them for gross negligence, but they will argue they were doing the exact opposite. But in the end, there is no answer. You must somehow continue your life, knowing that the firemen let many people die only to save you.

Then who is more important, the people entrusted with saving lives but are not heroes; or the people who want to be heroes but do not have this single responsibility?

This is the social dilemma. Is honor and symbolism something of the past? If I had the opportunity to be a hero, I would be one. Honor is something passed down across generations until it fades away. But nowadays, it seems people do not care about their ancestry, their past. It is all part of the American drama of divorce, lawsuits, obesity, drugs, irresponsibility, and a chronic disjunction between parents and their descendants.

Can the older generation’s response to the newer one possibly improve?

On virtual reality

Many people view virtual reality (VR) – and let’s point out the elephant in the room, the HTC Vive – as “the future that is now.” Then they hype, hype, hype and buy it. Then they complain that there aren’t enough VR games, that they’re all bad, and so on.

But consumers need to look at it from this standpoint: where were video games ten years ago? Twenty? Ten years ago we were playing low-poly games, and the best console on the market was the PlayStation 2. Developers came to understand the architecture better and better and were able to hyperoptimize their games to exploit what the hardware could really do – games that still can’t be emulated at full speed today without hacks along the way. Now it’s 2016, and we are seeing Direct3D 12 and photorealistic graphics. No, not FSX “photorealistic,” but faces rendered in realtime so humanlike that you can’t even tell whether they are fake or real. And in the ’80s and ’90s, we didn’t even have 3D graphics good enough for gaming, with the exception of some rising consoles such as the N64 – and even those were extremely limited in capability.

Thus, I implore consumers not to look at problems in the “now” but in the future. In twenty years we were able to achieve photorealism in realtime applications such as gaming. And now that the whole issue of graphics has been resolved (since any graphics card made in the last year can render basically anything you want), we have new issues related to VR: how can players wield variably shaped items, move around in a physically confined area (less than 9 m²!), sense their environment beyond sight, and interact with the virtual environment? These problems have yet to be solved. But if we literally invented (and perfected) 3D graphics in thirty years, could we not invent and perfect virtual reality in the same amount of time?

And the whole idea of how these problems will be answered frightens me, because the solutions might become intrusive. What if in 2030 we simply become accustomed to human alteration? What if in 2040 the first human beings enter the long-awaited “dream pods” that cede their consciousness to the hands of a computer? What if in 2055 we decide to just transfer our entire beings onto solid-state drives? What if in 2085 the human race just disappears from the face of the planet? Soon, virtual reality will become the only reality. And it’s turtles all the way down.

So don’t complain because one day, you’ll miss playing on a monitor.

On computer terminology

I see in many books certain attempts to ease the apparent pains of using computer terminology.

For example:

With the help of Tim Berners-Lee, the Internet became popularized with the creation of the World Wide Web.

This simple statement becomes this convoluted paragraph:

With the assistance of Tim Berners-Lee, a computer technology was developed that allowed computers to communicate with each other through what became known as the World Wide Web, which people could connect to through new software such as America Online and CompuServe that came on floppy diskettes. Thus came the existence of the Internet.

Authors continue to be extremely cautious in introducing computer terminology in their writings. But the truth is, who doesn’t know what the Internet is these days? Who doesn’t know what software is? And when authors do use the terminology, they often surround it with these metaphors so as to try to compare it to tasks once done by hand. “The Internet, like a pair of telephone wires, …” “With the advent of the microprocessor, computers once the size of rooms became smaller than the ‘a’ in this book…” This is the virtual world we’re talking about here. There is no substitute for these things.

No. Heck, no. If you’re going to include words like “axle” and “spigot” in a book and don’t bother defining them, then don’t bother with “die size” or “parallelization” either. Suck it up and make people learn the jargon. Don’t talk to them as if they were elderly people.

On Anki: 14 months on

Since January 27, 2015, when I entered my first set of cards into Anki, I have learned 550 kanji. No, not just stared at for 5 minutes… LEARNED!

When I first heard about spaced repetition, I thought the forum posts were too good to be true. But they prescribed the same advice: Anki. Anki. Anki. Study every day. Mine the crap out of Japanese and fling it into Anki. And I haven’t had any complaints about the system ever since that day, not because I’m an optimist who only looks at the positive side of “good” things, but rather because it’s (1) a scientifically proven model that accurately works with, not against, the dynamics of the brain, and (2) because once you set it up, you can study from wherever the heck you want. I study on my phone because it’s the most convenient, but I have to input new notes on the computer because it would take an eternity and a half doing it on a tiny phone with an even tinier keyboard.
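For the curious, the scheduling idea underneath is easy to sketch. Here is a minimal SM-2-style scheduler using the classic SuperMemo-2 defaults – not Anki’s exact algorithm, which also adjusts the ease factor per answer and adds fuzz to intervals:

```cpp
#include <cassert>

// Minimal SM-2-style scheduler sketch (classic SuperMemo-2 defaults,
// not Anki's exact implementation): each successful review multiplies
// the interval by an "ease" factor, so reviews spread out exponentially.
struct Card {
    int interval = 0;   // days until next review
    double ease = 2.5;  // growth factor (SM-2 adjusts this per answer)
    int reps = 0;       // consecutive successful reviews
};

void review(Card& c, bool remembered) {
    if (!remembered) {  // lapse: the card starts over
        c.reps = 0;
        c.interval = 1;
        return;
    }
    ++c.reps;
    if (c.reps == 1)      c.interval = 1;
    else if (c.reps == 2) c.interval = 6;
    else                  c.interval = static_cast<int>(c.interval * c.ease);
}
```

That exponential growth is why a few minutes a day is enough: most of your collection is always scheduled far in the future.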


On the current state of learning how to code

Since 2012, numerous organizations have sprung up to teach people of all ages to program. A famous example is the Hour of Code.

Back then, when I was eight or so, I did not have those opportunities. You couldn’t Google “programming for kids” and have something more functional than Scratch come up. And Python was not so popular back then, much less geared toward beginners. Consequently, I had to put up with VB.NET until such resources came about, and people actually started caring just a bit about youngsters who wanted to seriously pursue coding.

But after all that – the hours of code have passed, and you’ve mastered the docs after poring over them – what now? How would a nine-year-old ever start doing anything more constructive with coding than “Hello World” and bubble sorts and turtles without lurking on forums and having some autonomy?

On prodigies and starting things at an early age

Back in the 70s and 80s, we got people who picked up a certain hobby at very early ages. But in this overprotective day and age, the opportunities for such things to be learned so early in life are dwindling, because the means to learn them so early are “more” illegal now, and the systems involved in such hobbies have become increasingly complicated, convoluted, and expensive.

Do you not know how many hobbies have fallen apart due to this? Photography, aviation, ham radio, and heck, even computing. Society tells us that we can’t take pictures without a $600 camera, fly a plane without tens of thousands of dollars just laying around, talk to someone 12,000 miles away without being subject to massive regulation and buying equipment easily worth $1,500, and make a simple program without enclosing it behind a layer of abstraction or reading heavily on an operating system’s API.

Back then, life was much simpler.

I’m not saying that I wish I lived in the nineties, because it had a whole new set of problems; I’m saying that people should have just as many – or more – opportunities now than they did before, despite the necessity for the newbies in life to take more time to “catch up” to mankind’s recent inventions.

Do I have a grandfather who flew planes? No. An uncle who grew up as a hacker, or who enjoyed making games in BASIC or assembly? No. A father who has plenty of money to blow on a hobby? No! Then what the heck am I supposed to do?

There’s one thing you get for free when you live in this universe, and you get it one second per second at absolutely zero cost to you: time. You have 2,207,520,000 seconds available to you during your entire life. Currently, I have used 22.8% of that number. But even then, time is immortal. I may not have the money, but if I just keep working, and working, and working toward a goal related to a hobby, I will reach it. The energy I put in becomes purer and purer, because it is not energy born of anger; it is energy driven by passion, fueled by time rather than money.

Therefore, I have come to a conclusion: age does not matter. Its physical effects may place a burden on humans, and the surrounding environment lives on due to (and depends on) time, but such limitations will not last forever. Perhaps one was simply not able to learn or engage fully due to such circumstances and limitations.

But if prodigies do not emerge by being good at something in particular at an early age, then where will they emerge from? The recesses of an “even playing field” referred to as standardized testing? Such testing I detest, because it means nothing. It does not test competence in anything in particular, except what you sit down at a desk to do at school: algebra and English. Boring. Where is my multivariable calculus? Where is my C#, my JavaScript, my fluency in Japanese and Spanish, my loyalty, my talent, my devotion to a multitude of things, my passions, my hopes, my dreams and my desires? I do not see them on this multiple-guess sheet; I guess they do not matter nowadays.

At my high school, many are represented: the girls, in dance; the boys, in band and sports; and the programmers and hobbyists and the intelligentsia? Where are we? Do we not do service in our works, helping our classmates in homework? Are we not good enough? Is this a popularity contest? Why are we not respected? Why are we not represented or recognized?

I wish I could say I could go off to somewhere and do great things with people like me. But I can’t. I have to talk to people over the Internet who are twice my age and deal with lowly, average people down here and lowly, average problems.

And now you say the problem rests in my ego? Now I have become the enemy. Nonsense. It is not I who am your enemy; it is my humanness, my mortality, not my soul.

You wanna become a prodigy? You’re a prodigy for finding this blog, congratulations, now go away. You wanted to do C# and not VB when you were nine? Congrats, you did something, now go get a time machine and try to do it better. You wanted to win a contest? Congrats, you won the contest in my book, now burn my book and try 0th place next year.

I hate contests. I hate tests. I hate when people try to judge my life, because it takes away the pride and thus security that I can bestow myself with.

And without security, can you say that such things are really improving our lives?