Category: On Anything

The S9

I got an S9 from my father as part of a deal. I did not want the phone, but he got it anyway. This is a flagship device costing almost $1,000; not exactly a small step-up from the S4.

I have been trying not to get the phone dirty with my sweaty hands, but too late for that. It appears to be a well-built and well-designed phone, although it looks prone to damage without adequate casing.

I am not particularly fond of two things: materialism, and giving away random information to any app that wants it.

I mention materialism because nothing lasts forever – the S4, in its time, was the pinnacle of technology, but we have somehow advanced even further in five years. It is difficult to imagine what a phone will look like in five more years. One must also remember that the smartphone is an instrument designed to get things done – an integrated PDA and cell phone – although these days it serves more as a game console.

There are also immense privacy risks one takes simply by using this phone. Android has grown to such tremendous complexity that even I, a programmer, cannot fully comprehend the design of the Android system. Many more apps now grab your location, too, since optimizations have removed the battery cost that once discouraged obtaining a fine location. And the system has grown so tightly integrated that practically anything can access anything (if you allow it to).

The strongest aspect of this phone is its speed – whereas Google Maps takes 6 seconds to cold-start on my S4, it loads in about 1 to 1.5 seconds on the S9; essentially instantly.

Finally, this phone allows me to place “HD Voice,” “VoLTE,” “Wi-Fi,” and “HD Video” calls. All of these features seem to be exclusive to AT&T users, with a supported SIM card, on a supported phone (i.e. not an iPhone), in a supported location, on both sides of the call. In essence, the feature is useless for 90% of calls[citation needed]. How much longer will it take to develop and adopt a high-quality communications infrastructure that is standard across all devices and all carriers, including iPhones? Whatever happened to SIP – why didn’t Cingular give everyone a SIP address back in the day? Why do I have to use a cell phone to place a call using my number? Why do we still use numbers – when will we be able to switch to an alphanumeric format like e-mail addresses?

Yes, I understand that we have to maintain compatibility with older phones and landlines via the PSTN – whatever that is these days – and we also have to maintain the reliability of 911 calls.

The walled-garden stubbornness of Apple does not help much, either. Apple simply stands back and laughs at the rest of the handset manufacturers and carriers, who are struggling to agree on common communication interfaces and protocols. Will Apple help? Nope. Its business thrives on the discord among the other cell phone manufacturers and their failure to develop open standards. And when they finally agree on an open standard ten years later – yoink! – Apple adopts it instantly in response to the competition.

As for other features, I found the S9’s Smart Switch feature to work perfectly: it was able to migrate everything on my S4, even the contents of my SD card (I recommend removing the SD card from the original phone before initiating a transfer). It did not ask me about ADB authorization or anything like that, so I wonder how it managed to establish a connection to the phone simply by my unlocking it.

When Android will finally get a comprehensive backup and restore feature, however, remains beyond my knowledge. This is Android’s Achilles’ heel by far.

Oh, and I forgot one last thing about the S9: it has a headphone jack 🙂

On Let’s Encrypt

Let’s Encrypt has been operational for about two years now, although the project originally began in 2015. Let’s Encrypt is the saving grace of HTTPS, but precisely because it is the saving grace of HTTPS, I dislike how strongly it is endorsed.

Suppose that tomorrow, a security researcher discovers a critical vulnerability in Certbot or some other part of the Let’s Encrypt certificate issuance system, and within a week, almost every Let’s Encrypt cert has to be tossed into the CRL.

They couldn’t do it. They couldn’t possibly toss 100 million certificates into the fire, because LE has already reached a point where it is too big to fail. You can’t tell your users, who expect their website encryption to come for free, “Hey, your CA got compromised, so you’re going to have to pay $20 or more for a cert from Verisign, GeoTrust, or Comodo, because there are no other free, secure CAs available. Sorry.”

And if it comes to that, two things happen:

  1. Verisign et al. gouge prices and have the biggest cert bonanza ever, because website owners have no other choices.
  2. An HTTPS blackout happens, and half of all HTTPS-enabled websites have no choice but to fall back to plain HTTP. And if this happens on a version of Chrome in which insecure browsing is banned, then you can just forget about those websites, unless you are a website owner and choose option (1).

You have to remember the situation before Let’s Encrypt: browser vendors, most especially Google and Mozilla, were pushing as hard as they could toward eradicating HTTP and enforcing HTTPS everywhere, in light of the hysteria bordering on paranoia that followed the Snowden/NSA revelations. However, SSL/TLS certificate options were limited at the time: the existing free certificate services had been founded long before then and were commonly suggested to people who were absolutely desperate for a free certificate, but they were nonetheless unpopular among CA maintainers due to rampant abuse. In other words, on the idealistic side, people believed that every site ought to have HTTPS. But on the practical side, they asked whether your site really needed HTTPS if you couldn’t afford a certificate and were just serving static content.

Today, those old free CAs have been abandoned by CA maintainers in favor of the one CA to rule them all: the ISRG/Let’s Encrypt CA. I mean, we’re obviously not putting all our eggs in one basket here – if something goes wrong, we still have hundreds of CAs to go by, and if an owner really needs their HTTPS, they can just shell out $100 for a cert. That’s right, if you’re a website owner who cares more about their website than the average Stack Overflow user does, you should really consider shelling out money, even though we’re sponsoring a cert service that is absolutely free! Oh, and if something goes wrong, you get what you paid for, right? My logic is totally sound!
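
Out of curiosity, it is easy to see just how concentrated issuance has become. Here is a minimal Python sketch using only the standard library’s ssl module – the hostnames are merely examples – that prints which CA issued a site’s certificate and when it expires:

import socket
import ssl

def cert_summary(hostname, port=443):
    # Fetch the server's certificate over a normal TLS handshake.
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'issuer' is a tuple of RDNs, each a tuple of (name, value) pairs.
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return "%s: issued by %s, expires %s" % (
        hostname, issuer.get("organizationName", "?"), cert["notAfter"])

if __name__ == "__main__":
    # Example hostnames only; substitute any sites you care about.
    for host in ("letsencrypt.org", "example.com"):
        print(cert_summary(host))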

Let me reiterate: in the case of a future catastrophe, assuming that we are far enough into the future that browsers have placed so much trust in the HTTPS infrastructure that they now prevent casual connections to insecure HTTP websites, there are two answers based on how much money you have:

  1. You’re f**ed, along with millions of website owners. More news at 11. Maybe the folks at Ars Technica can tell you what to do. Except they’re also too busy panicking about their personal websites.
  2. Buy a cert before they raise their pri– oh, too late, they’re $50 a pop now.

So, I think the problem at hand here is the philosophy behind trust. Trust is such a complicated mechanic in human nature that it cannot be easily automated by a computer. When we make a deal on Craigslist, how do we know we’re not going to end up getting kidnapped by the guy we’re supposed to be meeting with? Is the only reason a bureaucracy trusts me as an individual because I can give them an identification card provided by the government? But how can I, as an individual, trust the bureaucracy or the government? Only because other people trust them, or people trust them with their money?

How does this tie into the Internet? How can I trust PKI, the trust system itself? What happens if I tie a transactional system – specifically the likes of Ethereum – into a web-of-trust system such as PGP? What happens if I tell people, “vote who you trust with your wallets“? What is a trustable identity in a computer network? What remedies does an entity have if their identity is stolen?

On Windows

I have held off on making a post like this for a long time now, but I think now is the time to do so.

I thought things would improve with Windows, but for the past five years (has time really gone by so quickly?), Microsoft has done nothing for its power users, effectively leaving them in the dark while it “modernizes” its operating system for small devices (netbooks and tablets).

Microsoft knows full well that power users are leaving in droves for Linux, so it developed the Windows Subsystem for Linux – essentially a remake of Interix – to let people “run Ubuntu” on their machines, all while keeping the familiar taskbar on their desktops and without having to tread through the territory of repartitioning, package management, and drivers. By playing up distros’ terse, hard-to-read documentation as an “advantage” of staying on Windows, Microsoft has kept the uninformed lured into Windows 10.

Let’s remember what Windows used to be primarily for: office applications. Professionals and businesspeople still use Windows every day to get their work done. They were so invested in the system, in fact, that some of them took to learning keyboard shortcuts and other nooks and crannies of the system to do work even faster (or to cope when using a mouse was not comfortable).

Today, Windows is used for three reasons:

  1. Microsoft Office dominates the market for productivity.
  2. Windows comes with almost every personal computer that isn’t a Mac.
  3. After MS-DOS, Windows was the go-to platform for PC gaming, and it still is. As such, gamers are reluctant to move anywhere else, lest their performance decrease.

The weight of Win32’s legacy features is too heavy a burden for Windows to keep moving forward as it is. Windows 10 has a multi-generational UI: Modern UI (e.g. the PC Settings menu) from Windows 8 and 10, Aero UI (e.g. Control Panel) from Windows Vista and 7, Luna icons (e.g. the Microsoft IME) from Windows XP, and UI that hasn’t changed since the very beginning (e.g. dial-up, the Private Character Editor) from Windows 98 and 2000.

The problem is that many business users still depend on Win32 programs. Microsoft is in an extremely tight spot: they must push for new software, all the while keeping friction as low as possible during the transition process.

But if Microsoft is going to eradicate Win32, why bother developing for UWP? Why not take the time now to develop cross-platform applications? This is why companies that care – that is, companies that do not sell their 15-year-old software as if it were “new” in 2018 – are targeting either the web or Qt (which is very easy to port). Programs that require somewhat tighter integration with Windows are very likely to use .NET, which means pulling out C#.

Here are some reasons I still use Windows on my desktop:

  1. I am accustomed to the keyboard shortcuts. (i.e. sunk cost)
  2. Microsoft Office.
  3. I can pull out a VM if I need Linux.

However, these reasons are becoming less relevant: I am unfamiliar with Windows 10 (due to its inconsistent UI), and Windows 7 is losing support soon. Moreover, a reliable method of installing Office through Wine is being developed, and hardware pass-through technologies such as VT-d have brought gaming performance in a VM to nearly that of running Windows natively.

I am also tired of the support offered for Windows: those who actually know what they are talking about are called “MVPs,” and everyone else simply seems to throw canned messages at support requests. For instance, if you look up “restore point long time” on Google, the first result is a Quora question called “Why does system restore point take so long on Windows 10?”, with some nonsensical answers:

  • It’s very fast, but restoring it can take a little while. Maybe you are referring to a system backup. Download this backup software and it should be super fast.
  • Just read the article on How-To Geek and it should cover everything. Two hours is worth it to get your computer working again. And if a restore point doesn’t work, just try another one.
  • Microsoft optimizes their DLLs for speed. Also, restore points are disabled by default.
  • This is a terrible feature.
  • Here is how to create a restore point. Go to the Start menu…
  • The “multiple levels of code” is just so much more advanced in Windows 10.

None of them answer the question: why does creating a system restore point take so long?

You can probably find similar blabber about why Windows Installer takes so long, or about any other technical feature of Windows.

These days, I don’t think many people really know how Windows works. How in the world am I going to use an operating system that nobody actually understands?

In comparison, any other well-supported Linux distribution has people so tough on support that they will yell at you to get all kinds of logs. With Windows, nobody really knows how to help you; with Linux, nobody wants to bother helping such a lowly, illiterate n00b as you.

As for Wine, if Microsoft did not financially benefit from it, Microsoft would have taken down the project before it ever took off. My suspicion is that once Wine reaches a stable state and Win32 has been eradicated from the new Windows, Microsoft will acquire (or fork) the project and use it as a platform for legacy applications.

All in all, Windows has served me very well over the years, but I have grown out of it. All the while, I wish to stay away from the holy wars fought daily in the open-source world, most especially the war between GPL and BSD/MIT, although the two camps do seem to be getting along these days. The problems arise when MIT code is about to get linked with GPL code, and that’s when developers have to say, “all right, I can relicense it for you,” or, “absolutely not, read the GPL and do not use my software if you do not agree with it.”

 

The “libre” paradox

There is a great amount of discordance in the worldwide community at large regarding what kinds of software should be made free, open-source, or commercial. Even I, who am not a developer of any prominent software, have had to tackle this question myself, especially after the Aseprite fiasco regarding its conversion from commercial GPLv2 to commercial closed-source.

My empirical finding about software production models is that while commercial software can achieve results quickly and efficiently, open-source software runs on ideas and thus tends to achieve results of greater quality. Developers might be hired to write a specific program in six months, yet a developer has all the time in the world to think about the design of a personal project before even putting down a line of code. Moreover, academics (assuming, of course, that academics are the ones who work on FOSS projects, since they are too busy for a full-time job but are keen to write code for societal good) have an affinity for peer review, encouraging only the best development and security practices, under risk of scrutiny otherwise.

It is no surprise, then, that companies tend to cherry-pick code and design from FOSS projects to fuel something slightly better.

When a new idea is introduced for the first time, it is competition and money that drive results. Bell Labs and its peers dominating computing research for decades, and the threat of the Soviet Union pushing the United States government to fund NASA’s research, are prime examples of these driving factors for research and innovation.

But neither Bell Labs nor NASA ever sold products to consumers. Instead, other companies were founded to fill this gap – not to create something radically new (when this occurs, they often either fail miserably or succeed dramatically), but rather to simply take the next step. The research has already been completed – just put it in a box, along with an instruction manual, and sell it to consumers. There’s nothing like it on the market, so it’s a perfect choice for consumers. Rake in the cash. Corner the market. And soon, a new company will form to take yet another baby step in innovation, and that one will be fruitful too.

When the innovation has become so clear and obvious to the public that it can be learned by undergraduates or any interested student, it is then time to charitably introduce the innovation to others. The modern computer has existed for a long time, yet Eben Upton and the Raspberry Pi Foundation took the “small” step of putting a SoC on a small board and selling it for $35. At the time, I don’t think it would have been easy to find a technologically up-to-date, general-purpose computing device at that price point and form factor. But because the Raspberry Pi Foundation did it, now many businesses exist for the sole purpose of manufacturing and selling low-cost single-board computers. As a result of this work of charity, computers are now easily accessible to all. What’s more, students can and must take courses covering the architecture of the modern computer, and some students are even tasked with constructing one from scratch.

Likewise, once an open-source project is done on a particular topic, that particular topic is essentially “done“. There are not many businesses out there that sell consumer operating systems anymore; if people seek a free operating system, there’s GNU. It’s done; why look further? Any improvements needed are a code contribution away, solving the problem for thousands of others as well. Why should companies strive to produce new modeling software if they must compete with programs like Blender and existing commercial software such as Maya?

My observation is that open-source software is the endgame. Commercial software cannot compete with an open-source program that has the same features; the open-source program will win consistently. Conversely, commercial software stems from open-source algorithms waiting to be applied, be it TensorFlow or Opus.

Basically, it makes sense to start a company to churn out commercial software if one is willing to apply existing research to consumer applications (take small steps); join a larger company to rapidly develop and deploy something innovative; or join academia to write about theory in its most idealistic form.

Under these observations, startup businesses fail because they attempt to innovate too much too quickly. The job is not to innovate immensely all at once – the job is to found a business on a basic yet promising idea (the seed), produce results, and then continue taking small, gradual steps toward innovation. The rate of innovation will be unquestionable to investors – if you survive for two years, putting out new features and products at a healthy pace, then people will naturally predict the same rate for the coming future and be more willing to invest.

Yet you would never find enough resources to make a successful startup for, say, building giant mechs or launching payloads into space. There’s just too much research to be done, and the many people who are capable of performing this research (and in demand) need coin to sustain themselves. In contrast, the military can pour any amount of money it wishes into a particular project, and it could have a big walking mech that looks like the one from Avatar in less than 36 months. (I’d wager the military has already been working on this as a top-secret project.)

But do you see how much we have departed from the idea of “libre?” My conclusion is this: businesses do things quickly, while charitable people do things right. Once the research has been completed and the applications have been pitched and sold, it is then time to transition and spread the innovation to the general public. That is the cycle of innovation.

The problem of image formats

In the making of Animated Chatroom, I’ve been encountering a major snag: none of the popular image formats seem to fit my needs. I need an alpha channel (that isn’t 1-bit!), animation support, and good compression. Here are the candidates:

  • GIF – used since the 90s. Good compression, excellent animation support, but palettized, with 1-bit transparency. I can’t use it for the complex 3D sprites, though. Dithering hacks are still used to this day to try to mask the limitations of GIF.
  • APNG – It’s meant for transparent animations, but is poorly supported by most libraries. Not even standardized; some browsers may be looking to remove it (already?). Many encoders implement it poorly, storing each PNG frame in full instead of compressing away the regions shared between frames, leading to inflated file sizes (often larger than GIF).
  • WebM – Alpha support was properly worked out for VP8 via the YUVA420P pixel format, yet left as an afterthought in the conception of VP9. VP8 has excellent compression, but again, YUVA420P support is cast aside in many programs built on FFmpeg’s decoders, so the alpha layer gets silently flattened onto a black or white matte.
  • PNG image sequence – Brute force solution. No inter-frame compression, leading to intolerable sizes.
  • MNG – Are there even up-to-date implementations of MNG? Does anyone even use MNG in 2018? I didn’t think so.
  • WebP – Seems decent, but inferior compression and lossy by default.
  • FLIF – Are we really ready to enter into “the future”? While FLIF may fit the bill for literally all of my needs, there is no stable support to be found anywhere, except in the form of a native library. I need support for Python if I am to get anywhere.
  • My own format – Why in the world would I want to do this? I would rather put LZ4 on APNG than reinvent the wheel.

For now, I don’t have much of a choice for animated image support except GIF, until certain bugs are fixed in pyglet that prevent alpha support when decoding via FFmpeg.
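
In case it helps anyone fighting the same battle, here is a rough sketch of the encoding side – assuming an ffmpeg build with libvpx on the PATH and a hypothetical frames/ directory of RGBA PNGs named frame_0001.png, frame_0002.png, and so on – that coaxes an alpha-preserving WebM out of FFmpeg by forcing the YUVA420P pixel format:

import subprocess

# Rough sketch: encode RGBA PNG frames into a VP8 WebM that keeps its alpha plane.
# Assumes ffmpeg (built with libvpx) is on the PATH and frames are named as below.
subprocess.run([
    "ffmpeg",
    "-framerate", "30",
    "-i", "frames/frame_%04d.png",
    "-c:v", "libvpx",        # VP8; alpha in VP9 is far less widely supported
    "-pix_fmt", "yuva420p",  # keep the alpha plane instead of flattening it onto a matte
    "-auto-alt-ref", "0",    # FFmpeg's libvpx encoder rejects transparency with auto alt-ref frames
    "out.webm",
], check=True)

Whether the decoder on the other end actually hands that alpha plane back, of course, is the other half of the problem.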

Paranoia about the eclipse

Here it is in TL;DR format:

  • If you didn’t spend $500 on the latest ISO for this exact solar eclipse event, don’t use equipment thinking that it “blocks the dangerous solar rays.”
  • When the Moon passes over the Sun, the Sun becomes an ultra-hot beam of plasma ready to annihilate anything that it touches.
  • You are an idiot because you are a non-professional who tried to look at the Sun.
  • Don’t look at the Sun or your eyes instantly bulge out of your eyesockets and explode.
  • $100 for eclipse glasses? Well, it’s only for a few minutes, and they make looking at the sun safe, so I think they’re worth the price ;)))))
  • Stay indoors because the zombies are coming.

When I was a kid, I used to look at the Sun for a second or so at a time. Did it make me a better person? No, but my vision was unaffected: I still do not wear glasses to this day. I can’t say the same now that my eyes have grown older: when I do look at the Sun, it leaves spots in my vision where the Sun was, and the spots linger for a few minutes until they fade away.

If you want real advice, go here: http://web.williams.edu/Astronomy/IAU_eclipses/look_eclipse.html

Internet

Without the Internet, I would never have amassed the knowledge I hold today. The wildly successful knowledge powerhouses of Wikipedia and Google never cease to captivate users into learning something new every day.

Yet I loathe the Internet in numerous ways. It’s become what is virtually (literally virtually) a drug habit, and in a way worse than a drug habit, because I depend on it for social needs and information. Without it, I would lose interesting, like-minded people to talk with, as well as a trove of information that I would otherwise have to buy expensive books for.

But without the development of the Internet, what would humanity be…? I suppose we would return to the days when people were actually inclined to talk face-to-face, invite each other to their houses, play around, sit under a tree reading a book, debug programs, go places, make things. It wouldn’t necessarily be a better future, but it would certainly be a different one. If it took this long to develop the Internet (not very long, actually), imagine the other technologies we are missing out on today.

And then there is the problem of the masses. The problem lies not in the quantity itself; it’s that attempting to separate oneself from the group merely comes across as elitism. And you end up with some nice statistics and social experiments and a big, beautiful normal curve, with very dumb people on one end and very intelligent people on the other.

This wide spectrum means that conflict abounds everywhere. People challenge perspectives on Reddit, challenge facts on Wikipedia, challenge opinions on forums, challenge ideas on technical drafts and mailing lists. And on YouTube, people just have good ol’ fistfights over the dumbest of things.

On the Internet, the demographic is completely different from that of offline society, even though the Internet was supposed to be an extension of human society. The minority – yes, those you thought did not exist: the adamant atheists, the deniers, the libertarians, the conspiracists, the trolls – suddenly become vocal and sometimes violent. The professionalism with which the Internet was designed is not to be found on any of the major streams of information. This is not ARPANET anymore. These are not scientists anymore, studying how to run data over wires to see if they can send stuff between computers. These are people who believe the Internet is freedom at last. Freedom to love, freedom to hate; to hack, to disassemble, to make peace, to run campaigns, to make videos, to learn something, to play games, to form opinions, to argue, to agree, to write books, to store things, to pirate software, to watch movies, to empathize, to converse, to collaborate, or just to tell the world you really hate yourself.

Thus, I am a victim of freedom and a slave to it. My friends do not talk to me anymore. I am just left with solitude and a keyboard.

Some ideas

Concept of AI itself

I’ve glanced at many papers (knowing, of course, that I understand very little of their jargon) and concluded that the recent statistical and mathematical analysis of AI has simply been overthought. Yet the theory of AI from the 70s and 80s delves into entirely conflicting perspectives on the driving force of AI and its relation to the morality and consciousness of the human brain.

Think about the other organs of the body. They are certainly not simple, but after 150 years, we’ve almost figured out how they work mechanically and chemically. The challenge is how they work mathematically, and I believe that an attempt to determine an accurate mathematical representation of the human body would essentially lead to retracing its entire evolutionary history, down to the tiny imperfections of every person in each generation. Just as none of our hands are shaped the same, our brains are most likely structured uniquely, save for their general physical layout.

I conjecture that the brain must be built on some fundamental concept, but current researchers have not discovered it yet. It would be a beautiful conclusion, like the mass-energy equivalence that crossed Einstein’s mind when he was working in the patent office. It would be so fundamental that it would make AI ubiquitous and viable for all types of computers and architectures. And if this is not the case, then we will adapt our system architectures to the brain model to create compact, high-performing AI. The supercomputers would only have to be pulled out to simulate global-scale phenomena and creative development, such as software development, penetration testing, video production, and presidential-class political analysis and counsel.

Graph-based file system

Traditional file systems suffer from a tiny problem: their structure is inherently a top-down hierarchy, and data may only be organized using one set of categories. With the increasing complexity of operating systems, the organization of operating system files, kernel drivers, kernel libraries, user-mode shared libraries, user-mode applications, application resources, application configurations, application user data, caches, and per-user documents is becoming more and more troublesome to attain. The structure of POSIX, in the present, is “convenient enough” for current needs, but I resent the necessity to follow a standard method of organization when it introduces redundancy and the misapplication of symbolic links.

In fact, the use of symbolic links exacerbates the fundamental problem of these file systems: they operate at too low a level, attempting to reorganize and deduplicate data while merely increasing the complexity of the file system tree.

Instead, every node should consist of metadata plus either data or a container linking to other nodes. Metadata may contain links to other metadata, or even to nodes consisting solely of metadata encapsulated as regular data. A data-only node is, of course, a file, while a container node is a directory. The difference is that in a graph-based file system, each node is uniquely identified by a number rather than a string name (a string name in the metadata is still used for human-readable listings, and a special identifier can serve as a link or locator for this node for other programs).
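
To make the idea concrete, here is a minimal in-memory Python sketch of the node model described above – all names here are hypothetical, purely for illustration: nodes are looked up by numeric id, file-like nodes carry bytes, and container nodes carry labeled links to other node ids.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Node:
    node_id: int                                             # unique numeric identity, not a path
    metadata: Dict[str, str] = field(default_factory=dict)   # e.g. {"name": "libfoo.so"}
    data: Optional[bytes] = None                             # present for file-like nodes
    links: Dict[str, int] = field(default_factory=dict)      # label -> node_id, for container nodes

class GraphFS:
    def __init__(self):
        self._nodes: Dict[int, Node] = {}
        self._next_id = 0

    def create(self, metadata, data=None):
        nid = self._next_id
        self._next_id += 1
        self._nodes[nid] = Node(nid, dict(metadata), data)
        return nid

    def link(self, parent, label, child):
        # The same child may be linked from many parents; there is no single canonical path.
        self._nodes[parent].links[label] = child

    def resolve(self, start, *labels):
        node = self._nodes[start]
        for label in labels:
            node = self._nodes[node.links[label]]
        return node

if __name__ == "__main__":
    fs = GraphFS()
    root = fs.create({"name": "root"})
    gcc = fs.create({"name": "gcc"}, data=b"\x7fELF...")
    include = fs.create({"name": "include"})
    fs.link(root, "compilers", gcc)
    fs.link(gcc, "headers", include)   # the compiler node points straight at its headers
    print(fs.resolve(root, "compilers").metadata["name"])  # -> gcc

Whether such a scheme could ever be made to perform well on disk is another question entirely, but it captures the “links instead of paths” idea.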

The interesting part about this concept is that it completely eliminates the need for file paths. A definite, specific structure is no longer required to run programs. Imagine compiling a program without the hell of locating compiler libraries and headers, because they have already been linked to the node where the compiler was installed.

The file system size could be virtually limitless, as one could define specifics such as bit widths and byte order upon the creation of the file system.

Even the kernel would base itself on this system, from boot. Upon mount, the root node is retrieved, linking to core system files and the rest of the operating system; package management to dodge conflicts between software wouldn’t be necessary, as everything is uniquely identified and can be flexibly organized to define exactly which applications require a specific version of a library.

In essence, it is a file system that abandons a tree structure and location by path, while encouraging references everywhere to a specific location of data.

Japanese visual novel using highly advanced AI (HAAI)

This would be an interesting first product for an aspiring AI company to show off its flagship “semi-sentient” HAAI product. Players would be able to speak and interact with characters, with generated responses including synthesized voices. A basic virtual machine containing a switchable English/Japanese language core, a common-sense core (simulating about ten years’ worth of real-life mistakes and experiences), and an empathy core (with a driver, to be able to output specific degrees of emotion) would be included in the game, which developers then parametrize and add quirks to for each character, so that every character ends up with a unique AI VM image.

In fact, the technology showcased would be so successful that players would spend too much time enjoying the authentic, human-like communication and getting to know the fictional characters too well, warranting a warning upon launching the game (like any health and safety notice) stating: “This game’s characters use highly advanced artificial intelligence. No matter how human-like these fictional characters act, they are not human beings. Please take frequent breaks and talk to real, human people periodically, to prevent excessive attachment to the AI.”

On the regulation of AI

The attempt to regulate AI seems so futile – we are regulating something that doesn’t even truly exist yet. We don’t have AI we can call sentient. The rationale is well-founded, but what we’re really trying to say is, “We know we can make something better than us in every way imaginable, so we’ll limit its proliferation so that humans are superseded not by AI, but by our own demise.”

So, after the many times this has been done ad nauseam, it looks like the “Future of Life Institute” (as if they were gods who have any power whatsoever to control the ultimate fate of humanity!) has disseminated the Asilomar AI Principles (Asilomar is just the place where the meeting was held; apparently, these astute individuals really like the beach, having gone to Puerto Rico for their previous conference two years prior). They have garnered thousands of signatures from prestigious, accomplished AI researchers.

The Asilomar Principles are an outline of 23 issues/concepts that should be adhered to in the creation and continuation of AI. I’m going to take it apart, bit by bit.

 

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

What is “undirected intelligence”? Does this mean we can’t throw AI at a big hunk of data and let it form its own conclusions? Meaning, we can’t feed AI a million journals and let it put two and two together to write a literature review for us. And we can’t use AI to troll for us on 4chan.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

They throw this word “beneficial” around but I don’t know what exactly “beneficial” means. Cars are beneficial, but they can also be used to kill people.

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?

You get programmers to stop writing lazy, dirty, unoptimized code that disregards basic security and design principles. We can’t even make an “unhackable” website; how could we possibly make an AI that is “unhackable” at the core?

  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?

You can’t. Robots replace human capital. The only job security that will be left is programming the robots themselves, and even AI will take care of patching their own operating systems eventually. Purpose – well, we’ve always had a problem with that. Maybe you can add some purpose in your life with prayer – or is that not “productive” enough for you?

  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?

Legal systems can’t even cope with today’s technology. Go look at the DMCA: it was made decades ago, back in the age of dial-up, and is in grave need of replacement to make the system fairer. Today you can post a video within seconds that most likely contains some sort of copyrighted content.

  • What set of values should AI be aligned with, and what legal and ethical status should it have?

Most likely, they will be whatever morals the AI’s developers personally adhere to. Like father, like son.

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

Like lobbying? I don’t think I’ve ever seen “constructive and healthy exchange” made on the Congressional floor. Dirty money always finds its way into the system, like a cockroach infestation.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

Doesn’t this apply to pretty much everything research-related? Oh, that’s why it’s titled “research culture.” I’ll give them this one for reminding the reader about common sense.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

I almost interpreted this as “AI should avoid being racist.” Anyhow, this is literally capitalism: competing teams will cut corners and do whatever they can to lead in the market. This is probably the liberal thinking of the researchers leaking into the paper: they are suggesting that capitalism is broken and that we need to be like post-industrial European countries, with their semi-socialism. In a way, they’re right: capitalism is broken – economic analysis fails to factor in long-term environmental impacts of increases in aggregate supply and demand.

Ethics and Values

Why do they sidestep around the word “morals”? Does this word not exist anymore, or is it somehow confined to something that is inherently missing from the researchers?

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

“Safety first.” Okay…

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

You want a black box for your AI? Do you want to give them a room where you can interrogate them for info? Look, we can’t even extract alibis from human people, so how can we peer into AI brains and get anything intelligible out of them?

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

This is not a place AI should delve into, anyway. We will not trust AI to make important decisions all by themselves, not in a hundred years.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

Meaning you want to be able to sue individual engineers, rather than the company as a whole, for causing faults in an AI. Then what’s the point of a company if they don’t protect their employees from liability?!

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

What if AI finds itself to align better to values than humans? What if the company that made an AI got corrupt and said to themselves, “This AI is too truthful, so we’ll shut it down for not aligning to our values.”

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

Debatable topics like abortion come to mind. Where’s the compatibility in that?

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

Again, we don’t even have control over this right now, so why would we have control over it in the future with AI?

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

And it probably will “curtail” our liberty. Google will do it for the money, just watch.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

What a cliché phrase… ohhh. It’s as if I didn’t include this exact phrase in my own MIT application, too gullible to realize that literally everyone else had the exact same intentions when they applied to MIT.

When Adobe sells Photoshop, is it empowering people to become graphic artists? Is it empowering everyone, really, with that $600 price tag? Likewise, AI is just software, and like any software, it has a price tag, and it can and will be put up for sale. Maybe in 80 years, I’ll find myself trying to justify to a sentient AI why I pirated it.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

Reminds me of the imperialist “Greater East Asia Co-Prosperity Sphere.” Did Japan really want to share the money with China? No, of course not. Likewise, it’s hard to trust large companies that appear to be doing what is morally just.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

I can’t even tell Excel to temporarily stop turning my strings into numbers; it’s not exactly easy to command an AI to leave a specific task to be done manually by the human. What if the data is in a raw binary format intended to be read by machines only? Not very easy for the human to collaborate, is it now?

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

I think at some point, the sentient AI will have different, more “optimal” ideas it wants to implement, or shut down entirely.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Tell that to our governments, not us. Oops, too late, the military has already made such weapons…

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

“Assumptions” including this entire paper. You assume you can control the upper limit of AI, but you really can’t.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

You don’t say.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

Because such efforts show that human labor is going to be deprecated in favor of stronger, faster robotic work…?

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Every person will have their own “superintelligence.” There will not be one worldly superintelligence until the very end of human civilization, which ought to be beyond the scope of this document, since we obviously can’t predict the future so far.

 

You can make pretty documents outlining the ideals of AI, but you must be realistic with your goals and what people will do with AI. Imposing further rules will bring AI to a grinding halt, as we quickly discover the boundaries that we have placed upon ourselves. Just let things happen, as humans learn best from mistakes.

On Aseprite

Once upon a time, my uncle wanted to give me Photoshop CS5 4 as a present for my tenth birthday. However, as he did not bring the physical box along with him when he visited (he was a graphic artist at the time), he ended up installing a cracked copy when I wasn’t on the computer. I kept whining that it was illegal, that he couldn’t do that, and that now there were going to be viruses on my computer, but he explained calmly that there was no other way, since he didn’t have the CD with him. So I said okay, vowing I’d uninstall it later, but after a while of using it, it kind of stuck, and no malware appeared (to this day, I am surprised at how he managed to find a clean copy so quickly). The only condition, as he stated, was that I could not use Photoshop for commercial purposes – basically, you can’t sell anything you make with this cracked Photoshop. Fair enough.

Even so, I steered away from Photoshop, as anything I made with it felt tainted with piracy. Later, I’d use it a little more, but I placed little investment in learning the software, as I had made no monetary investment in the software at all. I used Paint.NET instead, and despite its shortcomings (no vector mode, no text layers, half-decent magic wand, no magnetic lasso), the shortcuts felt familiar and the workflow remained generally the same as that of Photoshop. People also recommended Gimp as “the only good free alternative to Photoshop”, but I didn’t like Gimp because literally every shortcut is different, and the workflow is likewise totally different. The truth was that Photoshop was Photoshop, and Gimp was Gimp.

Yet I sought to do pixel art. This was supposed to be an easy endeavor, but Paint.NET was an annoying tool. Eventually, I found David Capello’s Aseprite and had no trouble adapting to the software, as it was designed for pixel art.

I had few complaints, but they had to be dismissed; after all, this was software still in the making. Only relatively recently was symmetry added, and the software was made more usable. I also liked its $0 price tag – if you were competent enough to compile the binaries yourself. And because the software was GPL, you could even distribute the binaries for free, even though Capello charged money for them. Capello was happy, and the FOSS community was happy. Some even tried setting up Aseprite as an Ubuntu package in universe, although it generally wasn’t up-to-date, due to stringent updating guidelines.

Until the day Capello decided to revoke the GPLv2. I knew the day was coming and wasn’t surprised when the news came. Plop, the old GPLv2 came off and subsequent versions were replaced with a license of his making, forbidding distribution of binaries and further reproduction. The incentive of making pull requests to add features was gone – after all, you were really just helping someone out there earn more money, as opposed to contributing to a genuine open-source project. Of the 114 closed pull requests, only 7 are from this year (as of the time of writing).

In fact, the entire prospect of Aseprite continuing as an open-source project collapsed, for Capello had bait-and-switched the FOSS community into supporting his image editor because it was “open source,” without clearly disclosing his ulterior motive of dropping the license in the future. Licensing under GPLv2 as opposed to GPLv3 was, after all, no mistake – perhaps it had something to do with compatibility with Allegro’s license, or with more permissiveness for other contributors? No. It had to do with a clause that GPLv3 has but GPLv2 does not: the irrevocable, viral release of one’s code to the open-source realm. Without this important clause, and because he was the owner of the code, Capello could simply rip off the old license and slap on a more proprietary one, which is exactly what he did.

The argument in defense of Capello was, “Well, it’s his software, he can do whatever he wants.” After all, he was already charging for the program anyway. But the counterargument is that the GPL is intended by the Free Software Foundation to promote the open-source movement, not to deceive users into thinking your for-profit project upholds the ideals of free and open-source software – especially the open part: free as in freedom, not just free as in beer. Now there is not only a price tag on the product, but also a ban on distributing binaries, thanks to this incredible decision to make more money.

Yes, I know someone has to keep the lights on. You can do that in many ways, but one of them is not turning your “open-source” project into downright proprietary software. Now, people demand more and contribute less – why should they pay when there are fewer results and fewer features being implemented? The cycle of development decelerates, and putting money into Aseprite becomes a matter of business rather than a matter of gratitude.

I don’t remember how to compile Aseprite at this point. I remember it being mostly a pain in the butt having to compile Skia, but that’s about it. Thus, I have no more interest in using Aseprite.

Now that I am entering college, Adobe is offering absolutely no discounts on its products. It’s almost as if they want kids like me to go ahead and pirate Photoshop again. There is no way I can afford a single program priced like an entire computer. Yes, I know, Aseprite is obviously cheaper than Photoshop, but why should I buy a pixel-editing tool when I can get something that can do all kinds of image manipulation?

A slap in the face goes in the general direction of Adobe and David Capello. Good job keeping the image-editing market at the status quo.