Category: Projects

Making an e-bike with display

This is an explanation of another one of those ambitious projects that I really want to do, but for which I have neither the experience nor the people to actually do it with.

I hate rough inclines: they kill my legs. The number one deterrent to riding a bike in my childhood was that my neighborhood has some very steep inclines. They made riding a bicycle not a very pleasant experience, and my father never wanted to take me to a park to ride, so in the end, I never really used my bike.

However, given that a bicycle is the only practical way to get around quickly in the city where I attend college, I want to start riding again. And after a year or so of riding that bike, I want to make the riding experience cooler.

First, I want to retrofit a brushless DC motor to the drive shaft; something rated for around 600 W of power output. If it is not possible to attach it directly to the hub, I’ll attach it to the shaft with a belt; ideally, a belt with the quality of a timing belt. But I hope I don’t have to do this, because if so, I’d have to play with the tension, pitch, and so on of the belt, which would be problematic.

Next would be the electronic speed controller (ESC) and the charge controller. I want the controllers to automatically switch to a regenerative mode under light braking by bypassing the ESC, inverting the poles of the motor, and taking the current straight to the charge controller. Then, on pedaling, the controllers should switch back to drive mode. This behavior would be directed by the main controller, since regenerative braking is a non-essential feature.
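To make that switching logic concrete, here's a minimal sketch of the decision I have in mind, written in Python for readability (the real thing would live in the microcontroller's firmware). The thresholds and signal names are placeholders, not measured values.

```python
# Hypothetical drive/regen mode selection. Thresholds and signal names
# are placeholders, not measured values.

DRIVE, COAST, REGEN = "drive", "coast", "regen"

def select_mode(brake_pct, pedal_force_n, speed_kmh):
    """Pick a power mode from the current sensor readings."""
    if brake_pct > 0.10 and speed_kmh > 8:
        # Light-to-moderate braking at speed: bypass the ESC and route
        # the motor current to the charge controller instead.
        return REGEN
    if pedal_force_n > 5:
        # Rider is pedaling: back to normal drive mode through the ESC.
        return DRIVE
    return COAST
```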

Speaking of a main controller, what exactly is it? The main controller is the Arduino (or whatever microcontroller I decide to use) that is wired to the ESC and the charge controller, but it is not required for the bike to operate, in case of a fatal error or low battery charge. It would run a real-time operating system with prioritized continuous tasks and many, many interrupt routines. These would be its high-level tasks, in descending order of priority:

  1. Emergency brake applicator. Continuously checks the “emergency stop” button, the dead man’s switch (clipped to my clothes, but the clamp is limited enough that it cannot be clipped to the handlebars or another part of the bike; the other end of the clamp attaches magnetically to a port on the control box), and >95% application of the brakes while moving at considerable speed.
  2. 10 Hz alternating pulse. This signal is generated and passed through some kind of failsafe circuit, which then determines whether or not the ESC should be enabled. The alternating pulse ensures that the main controller is not “frozen” on an operation that could prevent it from stopping the motor; the assumption is that as long as the pulse keeps alternating, the controller is working as intended. (A sketch of this heartbeat idea follows the list.)
  3. Speedometer. It samples how fast the back wheel is spinning and derives the current speed.
  4. Speed regulator. This task scales back the output DC current based on how close the bike is to the speed limit. This task can be overridden, but it’s not a good idea to do so.
  5. Brake detector. This task detects the brake application percent. The actuation of the brakes is completely analog, but if it is significant, the main controller can signal to go to regenerative mode.
  6. Pedal detector. This task simply detects how much positive force is being applied on the pedal and sets the target DC current proportional to this force (clamped, of course).
  7. Odometer. It uses the same sampling as the speedometer, but it increments the distance by the circumference of the wheel. Every 0.2 miles or so, it writes the total to the EEPROM. I suppose I could use a pointer to level the wear on the flash, or I could use a preexisting file system designed specifically for microcontrollers.
  8. Display driver. This assumes that there exists a layer of abstraction between the UI and the display itself.
  9. Sound driver. Just for basic beeps and boops.
  10. Main UI. This handles button interrupts (the calls of which are passed to the foreground user task), the failsafe UI (if all user-mode UI tasks are dead), and the UI toolkit itself.
  11. Foreground user task. Dashboard, options, etc. Must not directly control motor operation.
  12. Background user tasks. Battery icon, clock, etc. Must be non-critical.
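As promised, here is a rough sketch of the heartbeat from task 2, again in Python for readability rather than the C that would actually run on the Arduino. The pin interface is hypothetical; the point is only the toggle-or-die contract with the external failsafe circuit.

```python
# Rough sketch of the 10 Hz heartbeat from task 2. A dedicated task
# toggles an output pin every 50 ms (a full 10 Hz cycle); the failsafe
# circuit only enables the ESC while it keeps seeing edges. If the main
# controller locks up, the pin stops toggling and the ESC is cut off.

import time

class Heartbeat:
    def __init__(self, write_pin):
        self.write_pin = write_pin  # callback that drives the real output pin
        self.state = False

    def tick(self):
        """Called by the scheduler every 50 ms."""
        self.state = not self.state
        self.write_pin(self.state)

if __name__ == "__main__":
    hb = Heartbeat(lambda level: print("pulse pin ->", level))
    for _ in range(6):          # simulate 300 ms of scheduler ticks
        hb.tick()
        time.sleep(0.05)
```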

The e-bike’s main controller would require a key for operation and then a simple on/off SPST switch located in front of the handlebars. The display would ideally be a Hitachi HD44780-style LCD, but it could also be one of the Nokia-style LCDs, although those might be a little too small. There will be six buttons: on the left below the display, four directional buttons laid out horizontally (in a style familiar to Vim users or Dance Dance Revolution/StepMania players), and on the right, a back button and an enter button. The display and controls need to be waterproofed.

Instead of using heavy deep-cycle lead-acid batteries, I’d just opt for using LiPo cells, which are ubiquitous in hobby usage for high-performance electronics. Industry professionals are not fond of LiPo cells because they are comparatively more dangerous and volatile than other types of cells, and this increased risk cannot be tolerated in mass production. However, since I am not mass-producing e-bikes, it should be OK to accept the risks and enjoy the power of lightweight LiPos, as long as their charging is supervised closely.

This e-bike also needs a brake light, signal lights, and an LED headlight with a white color temperature rather than blue.

That’s all I want the bike to do. All of this, but I want to keep it street-legal and be able to prove that it can be safely ridden on busy streets, given its various fail-safe mechanisms, including a speed regulator that requires manual override.

Sadly, I don’t know if I will ever be able to make this contraption.

Where’s the good backup software?

For *nix users, the answer is easy: rsync. For Macintosh users, the answer is even simpler: Time Machine (“time ‘sheen”). For Windows, the answer is a convoluted mess of choices. And the problem is that none of those choices give everything you want.

Why can’t you have everything? Here are all the things a backup program needs:

  • Permissions. If you can’t preserve your metadata, forget about making faithful backups. POSIX and Windows permissions are very different, but they still deserve the same love.
  • Resilience. The restore part of a program should never produce a fatal error, unless a backup has been corrupted beyond repair. If a part has been corrupted, ignore that part, notify the user that a corrupted portion was ignored (noting, of course, what the corrupted portion actually is), and continue with the restore process.
  • Compression. Many would argue that compression only makes the backup more difficult to restore, yields a minimal return in efficiency, etc. However, this can make a large difference when uploading from a personal home network to a storage service, where storage costs are billed by the gigabyte. I don’t know about you, but $1 a month was more than my tax return this year.
  • Encryption. Everyone’s got their tinfoil hats on, how about you?
  • Incremental backups. People are not going to do full backups every week. This is a waste of time, storage space, and energy, since most files would be redundantly stored.
  • Block-level. If you modified a 20 GB VHD file, are you going to copy that whole thing on every weekly incremental backup? No, you’re going to copy the differences in blocks/parts of that file.
  • Archivable. It appears most people choose either image-based backups or file-based backups. I personally prefer working at the file level, but this should not mean “copy millions of files and spew them onto the target directory.” The backup should be neatly organized in, say, 50 MB parts that can be easily uploaded to a cloud service as part of a future backup plan. Or, it can just be made into a monolithic 800 GB file. The former is workable with most consumer file services, while the latter is most convenient for more enterprise-oriented services like Amazon Glacier.
  • Resumable. Most backup programs hate it when you shut down your computer for the night. Yet none of them seem to understand that this is exactly what shadow copies are for. Even after shutting down the computer, shadow copies do not magically change. Yet the software restarts your entire backup and creates yet another useless shadow copy, for the mere sake of not wanting to touch files in use and of making the most up-to-date backup possible.
  • Snapshots. Let’s say I don’t want to restore my whole computer; I just want to see an old file and its version changes over time. Most backup programs will not let you do that, citing that it is “too complex.” No, it’s not. Track the files the software backed up, using a tiny database like SQLite (see the sketch after this list). There, you can store checksums, file sizes, previous versions, and so on and so forth. The suffering ends there. The end user can view a snapshot of the computer at a certain point in time, or view the history of a specific file, perhaps with diffs (binary diffs if the backup software is user-friendly enough).
  • Low profile. What is CloudBerry Backup using 2.7 GB of memory for? Just flapping around? No! Decent backup software should use 100 MB of memory, tops. Leave the heavy RAM consumption to browsers, games, and servers.
  • Integration. This backup software should be robust enough to make anything either a source or a destination for backups, notwithstanding the limitations of each backup medium.
    • Least liquid: Offline local storage; Amazon Glacier; Google Coldline
    • Somewhat liquid: FTP (due to its slow transfer speed of many files and inability to perform multipart transfers); most consumer storage services
    • Most liquid: iSCSI SANs; high-availability storage services
  • Drive path-agnostic. Backup software should never, ever depend on drive letters to figure out backup sources and targets.
  • Predict drive failure. This goes somewhat beyond the scope of backup software, but there should be at least some kind of periodic SMART monitor to inform and warn a user of a drive that is showing signs of failure. Yes, put a big popup on the notification bar with a scary message like “Your drive might fail soon” or just outright “Your drive is failing.” Show it to them for the first three days, make it go away, and then show it again the next week. Of course, the notification can be dismissed for a specific drive, but that should require them to read a message about possibly losing data on the failing drive and wait 5 seconds before closing the dialog; after that, they never have to see the dialog for that drive again.
  • Recognize cache folders. Here’s what you need to do: just stick that CCleaner scanning stuff into your product. Make the default backup plan ignore whatever CCleaner would usually clean up. Caches can add up to gigabytes in size, and many users do not even care about including them in their backups, because all they want are their programs and documents. However, there is that one company that might say, “No, you can’t ignore cache folders, because we need a perfect file-level backup of the system tree.” (My argument would be to use CloneZilla and do it at the image level, but fine.)
  • Import from other services. No, I don’t care much about Acronis, Veeam, or other proprietary solutions. What I do care about, however, are the crappy Windows 7 Backup and Restore backups, dd “backups,” and other image-level backup formats. Don’t just import the backups: import file history, recompress them, preserve timestamps. Give them the full treatment, and put them neatly in the new backup format as if it really were an old backup.
  • Responsive (and responsible) backend. Big enterprise backup software uses a UI frontend, which merely communicates with the service backend. This is generally a good design. However, when the backend decides to quit, the UI frontend goes into limbo and does not respond to any commands, instead of providing a reasonable explanation of what is happening with the backend, while the backend does not attempt to halt whatever blocking operation is taking too long. The gears just grind to a halt, and nothing gets done on either side.
  • Don’t delete anything without asking. No, I don’t even want auto-purge functionality, and if you must have it, for the love of God, make it a manual operation. There is no reason to keep purging things constantly, unless you have a disk quota to work under; in that case, the software should determine what is best to purge (start with the big stuff in the earliest backups) to meet the size requirement.
  • Only one backup mode. That backup mode had better be good, and it should use a hybrid format.
  • Open-source format. The software itself may not be open-source, but with an open format you are essentially ensuring that someone out there can make a restore tool that stays compatible with the latest and greatest operating systems.
  • Bootable. Where are you going to make your restores from? A flash drive running Linux with an ncurses interface for your backup software, obviously. You could, of course, allow backups from that same bootable drive, in the case of an infected drive or as part of a standard computer emergency response procedure – but eh, that’s really pushing it. Just restores will do fine.
  • Self-testable. Make sure the backups can actually restore to something.
  • Exportable. One day, your backup software will not be relevant anymore, so why bother locking in users to your format? Make it so that they can export full archives of their backups, with a CSV sheet explaining all of the contents of each archive.
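To show that the snapshot bookkeeping above really is as small as I claim, here is a sketch of the kind of SQLite catalog I have in mind. The table and column names (and the example path) are made up, and a real product would track more metadata such as permissions and attributes.

```python
# Sketch of a tiny snapshot catalog in SQLite. Table and column names
# are made up; the point is that tracking file versions and per-file
# checksums is cheap, not "too complex".

import sqlite3

schema = """
CREATE TABLE IF NOT EXISTS snapshots (
    id        INTEGER PRIMARY KEY,
    taken_at  TEXT NOT NULL              -- ISO 8601 timestamp
);
CREATE TABLE IF NOT EXISTS file_versions (
    id           INTEGER PRIMARY KEY,
    snapshot_id  INTEGER NOT NULL REFERENCES snapshots(id),
    path         TEXT NOT NULL,          -- drive-letter-agnostic path
    size         INTEGER NOT NULL,
    sha256       TEXT NOT NULL,
    archive_part TEXT NOT NULL           -- which ~50 MB part holds the data
);
CREATE INDEX IF NOT EXISTS idx_versions_path ON file_versions(path);
"""

with sqlite3.connect("catalog.db") as db:
    db.executescript(schema)
    # "Show me the history of one file" is then a single query:
    history = db.execute(
        "SELECT s.taken_at, v.size, v.sha256 "
        "FROM file_versions v JOIN snapshots s ON s.id = v.snapshot_id "
        "WHERE v.path = ? ORDER BY s.taken_at",
        ("C:/docs/thesis.docx",)          # example path
    ).fetchall()
```

With a checksum and an archive-part reference stored per version, both “restore this snapshot” and “show me this file’s history” become plain queries.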

At the end of the day, users just want their files safe and sound, so keep the software as close to the fundamentals as possible, and allow others to make tools around the backup software if additional functionality is needed.

Ideas for a new operating system

As I was watching Druaga compare the Windows system folder with the Mac system folder (which is probably just a pretty frontend to a really ugly backend), I suddenly began to pace around, thinking about that graph-based file system again. I also thought about the “organization” defined by POSIX: is /usr/, /var/, /etc/, /opt/, /lib/, etc. really understandable? There’s clearly a conflict here: we want an organization that is readable to the user while also serving the core system, its components, and applications.

I speculate that a new operating system will be created in the next generation. Currently, I believe it is nearly impossible for a new kernel to gain traction, due to the excess complexity that semiconductor companies have thrown into their electronics, which renders PC hardware support effectively exclusive to Linux and Windows, since those are the only two systems they really test.

Why do we need a new operating system? In truth, we really do not. The conservatives will say, “Then why put so much effort into making an operating system that nobody will use, when there already exists one that works?” I’d ask the same thing about GNU Hurd, ReactOS, and so on.

It’s for the future. You see, there is a fatal flaw in the current operating system organizational architecture: it’s a limited directed graph that amounts to a tree. It works under the premise that system data can be organized like papers and files inside folders. But the truth is that such data can be organized in various ways, not necessarily in a hierarchical or tag-based structure.

An undirected graph-based file system would work like the brain, using more fundamental pieces of data that could allow the cluster size to go down to perhaps 2 KB. It would be incredibly difficult to visualize, but you could still place sections of this data system in different physical locations, such as a server.
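To make that slightly less hand-wavy, here is a toy sketch of what the underlying store might look like. Everything about it is speculative, including the 2 KB figure.

```python
# Toy sketch of an undirected graph store: every piece of data is a small
# node, and relationships are untyped edges rather than parent directories.
# Entirely speculative; the 2 KB "cluster" and the field names are guesses.

class Node:
    def __init__(self, node_id, payload=b""):
        assert len(payload) <= 2048       # hypothetical 2 KB cluster
        self.node_id = node_id
        self.payload = payload
        self.edges = set()                # ids of neighboring nodes

class GraphStore:
    def __init__(self):
        self.nodes = {}

    def add(self, node):
        self.nodes[node.node_id] = node

    def link(self, a, b):
        """Undirected: both nodes know about each other."""
        self.nodes[a].edges.add(b)
        self.nodes[b].edges.add(a)
```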


Tape drive VCR, part 1

One day I had this amazing idea! I was looking through tape drives for sale, and as usual they were over $1,200 for LTO-5 or LTO-6 drives, the only generations that can match the current hard drive market. There are so many unused VHS tapes out there, and with the untapped potential of analog storage media, you could store digital data on these cassettes! After all, they’re just tape! You could make… a tape drive using a VCR!

All right, I think you’ve got the sarcasm and naivety of my thought process. If you think about it for only a few seconds, it’s just silly humor. But when it remains in your mind for days on end, and you keep wondering whether or not it truly is possible, you feel as if the only way to find out is to try it yourself.

Let’s take a closer look at this incredible idea. The first and only popular stab at it was ArVid, a Russian ISA card that ran composite video out to your VCR, and that was it. It could store data at speeds of up to 325 kbps, and with some simple math we arrive at almost exactly 2 GB on an E-180. And you know what, a lot of people said, “yeah, I guess that’s reasonable,” and they stopped there.

But ArVid has some huge limitations, and removing them could have increased its capacity. First, it has only two symbols: luma on and off (!!!), which already makes the storage incredibly inefficient! It uses some Hamming code for ECC, but that’s about it, according to Wikipedia. Now, I’m no expert on signal processing (I just started seriously reading about this an hour or two ago), but with QPSK or QAM, we can make it significantly more efficient. So, screw ArVid.

We also don’t need an additional card to bring the analog data over to the VCR. We can use the sound “card” that is already built into the motherboard to produce the analog signals we need, and at an acceptable sample rate too. (Strictly speaking, “sample rate” doesn’t exist when we’re talking about pure analog signals, but we still need to convert our digital signal to analog, and the sound card only supports up to 96 kHz or 192 kHz, thereby limiting our symbol rate.) A separate sound card might still be convenient, however, given that this method may hinder the user’s ability to use sound at all (or the user may accidentally trigger a system sound that interferes with the data throughput).
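To give a feel for what richer symbols buy over ArVid’s on/off luma, here is a bare-bones QPSK mapper in Python with NumPy. It is nowhere near a working modem (no pulse shaping, no synchronization, no ECC), and the carrier frequency and symbol rate are just illustrative numbers that fit inside a sound card’s bandwidth.

```python
# Minimal QPSK mapper: 2 bits per symbol instead of ArVid's 1 bit.
# Illustration only; carrier, symbol rate, and (absent) filtering are
# placeholders, not a finished design.

import numpy as np

SAMPLE_RATE = 96_000      # what a typical onboard sound card can do
SYMBOL_RATE = 12_000      # illustrative; must fit in the audio bandwidth
SPS = SAMPLE_RATE // SYMBOL_RATE   # samples per symbol

def qpsk_modulate(bits, carrier_hz=18_000.0):
    """Map pairs of bits to phases and mix onto an audio-band carrier."""
    bits = np.asarray(bits).reshape(-1, 2)
    # Gray-coded phase per 2-bit symbol: 00, 01, 11, 10
    gray = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}
    phases = np.array([np.pi / 4 + gray[tuple(b)] * np.pi / 2 for b in bits])
    symbols = np.repeat(phases, SPS)          # rectangular pulses, no shaping
    t = np.arange(symbols.size) / SAMPLE_RATE
    return np.cos(2 * np.pi * carrier_hz * t + symbols)

audio = qpsk_modulate([0, 1, 1, 1, 0, 0, 1, 0])   # ready to write to a WAV
```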

So, how much data exactly do we think a VHS tape can carry? I think that in a perfect world with an ideal design, it will be somewhere between 80 and 160 GB. However, formal calculations based on the chosen modulation will be required to prove this, so I will not talk much about it.

Instead, I’ll discuss the practicality of this design. Yes, you could hack a remote control, stick it to the VCR, and use that as the interface for communication. Haha! But to be honest, I’m not really willing to destroy my VCR and remote just to find out how well this is going to work. The solution, then, becomes fairly clear: just instruct the user on what to do. The user would note where a piece of data is stored, move the tape to just before it, and hit “read” right before the data is reached. The signal would be aligned and processed perfectly.

Alternatively, we can tell the user to “initialize” the VHS by having the software sprinkle position markers across the tape. They don’t have to be exact placements, but they give the software an idea of what space has been consumed and where to go based on the last position marker read, assuming that the software is tracking where data has been stored in some sort of external master file table. This can then be turned into simple instructions for the user, like “rewind for about 20 seconds.” The user would play back a little bit, which would allow the software to give feedback on how close they are to the data (and if actual data is being played back, then this should be detected and the user should be instructed to go back to the beginning of the data).
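Here’s a sketch of how that master file table could be turned into those instructions; the assumption that positions are tracked in seconds of playback time, and the function names, are hypothetical.

```python
# Sketch: turn the external "master file table" into a human instruction
# like "rewind for about 20 seconds". Positions are assumed to be tracked
# in seconds of playback time; all names here are hypothetical.

def seek_instruction(current_marker_s, target_start_s):
    """current_marker_s: tape position of the last marker the software heard.
    target_start_s: tape position where the wanted data begins."""
    delta = target_start_s - current_marker_s
    if abs(delta) < 2:
        return "You're there; press play and the software will sync up."
    verb = "fast-forward" if delta > 0 else "rewind"
    return f"Please {verb} for about {abs(delta):.0f} seconds, then press play."

print(seek_instruction(current_marker_s=610, target_start_s=590))
# -> "Please rewind for about 20 seconds, then press play."
```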

I’ve been taking a look at GNU Radio and I think this should give me a fair estimation of what modulation method(s) to use, and how much noise is expected. We’re dealing with VHS, which is great, because the expected noise is extremely low.

Soldering

The big problem with soldering is resources. If you don’t have the right materials, the right solder, and the right flux, you’re going to end up botching the whole thing like I did.


It was fairly obvious that I was going to mess up. But hey, you know what they say: if you must fail, fail spectacularly!

Oh well, eventually I’ll have this 20×4 LCD set up and wired to the Elegoo Uno R3 (an Arduino/Genuino Uno clone). Unfortunately, I don’t have those easy-to-solder header pins, which is why I have to do this ugly hack of soldering the cables in directly. Hopefully the LCD doesn’t turn out to be destroyed by the heat.

The hackathon

The hackathon was okay. There were some regrettable moments and some unforgettable ones.

When I arrived there, I was still pretty miffed that my friends had ostracized me from their group. I came in with the “I’m-scared-of-tall-white-people-with-glasses-and-braces” look, but to be honest, looks turned out to be deceiving when the final products appeared.

I saw some kids from my summer engineering program, at least the ones that mattered. There were noticeably fewer kids than expected, but this turned out to be quite advantageous.

As with any group project, I was the mastermind, and everyone else just sat and watched me do all the work. More specifically, they played League of Legends for hours on end. That morning I did not have an idea for a project, but before I came, I suddenly recalled my need for an all-encompassing cloud storage solution, so I decided to call the project UltronCloud. I mean, it’s never going to be finished, so I might as well give it a joke name.

The environment was excellent; this was the college my brother goes to. It’s private, but the tuition turned out to cost less than that of a public university, and needless to say, it seems that every penny of it was spent wisely on the infrastructure and architecture. I got a huge-screen television all to myself, so I was able to use it as my primary monitor, which made things very easy on my eyes as the night progressed.

The hackathon was great, or rather, should have been great. But I don’t think I took advantage of the opportunities; there were mentors teaching how to develop for mobile platforms. I also didn’t take as many breaks as I should have; I strained myself to squeeze every hour out of the venue, so I didn’t have as much fun with the other kids. On top of that, the challenges I faced while making the project were serious to a ridiculous extent. Some problems took hours to solve, only to be met with yet another problem.

The following section is part of an issue I filed on the repository of the library I used, because the following morning I was so mad that I had wasted all this time for nothing. Once again, I hold nothing against the developer of the library:

Literally every step of the way has been riddled with bugs, quirks, and undefined behavior, even when following the instructions to the letter and trying it on two different Windows 7 x64 machines. Needless to say, I wasted my time trying to make a frontend out of this library. Maybe you can figure out whether the library hates me or if it’s just that unstable.

The first problem was when DokanCloudFS failed to load assemblies when I set the build configuration to NuGet-Signed. If I tried cleaning the build, it would still error out. If I tried changing the build config back to regular NuGet, yet again it would throw the exact same exception. The solution was to nuke the entire project, keep it in the default configuration and never touch it again. This alone cost me a few hours to figure out.

And alas, very shortly later, more problems arose. My mounted Google Drive appeared as a drive, but all interactivity with it was completely blocked, thanks to a vague exception thrown repeatedly as shown in the console:

Exception thrown: 'Google.GoogleApiException' in mscorlib.dll
...
Exception thrown: 'Google.GoogleApiException' in Google.Apis.dll
Exception thrown: 'System.IO.InvalidDataException' in SharpAESCrypt.dll
...
There was absolutely no stack trace, and Visual Studio did not even bother to break.

And this was after I had compiled CloudFS, put in the secret keys for GDrive, copied the output DLLs to the DokanCloudFS Library folder, ensured that it had access to the Drive API by turning it on in the console, and waited a few minutes for it to “enable.”

So I said, screw it! Let’s use OneDrive instead, thinking that somehow it would ease my pain. Nope. Same spiel. Except Microsoft was taking me to some OAuth2 auth link that would just lead to a blank page. After a bit of research, I found out that I had to add “mobile” as a platform in order to even get an OAuth2 login page. Okay, so it asks me, “Let this app access your info?” with the usual permissions, and I click “yes”… and it just opens another browser window to do the exact same thing. I click yes again, and the window reappears ad infinitum. And instead of the `GoogleApiException`, I get a `System.Security.Authentication.AuthenticationException in mscorlib.dll` along with a `System.AggregateException`, which VS *should* be breaking on to tell me about, but it’s not doing squat.

By this time I’m forgetting about even running the DokanCloudFS.Mounter example and instead just building hacks to bridge the frontend with the library, using the mounter program as an example because there’s absolutely no documentation that comes with it.

And as of the time of the writing of this issue, I’ve spent sixteen hours trying to get all this to work just to make a frontend that will mount OneDrive, Google Drive, etc. in unison.

I racked my brains so hard that instead of pulling the all-nighter as I had intended, I decided to sleep for three hours. I didn’t bring a pillow or a sleeping bag, so I was in for a really nice sleeping experience. Thanks to the lady who showed me where the cot was in the nice, dark, quiet room; all the couches were taken. So between the hours of 3 AM and 6 AM, I rested and tried to figure out what to do with the project. The resting period was important, because when I woke up (I think I only achieved REM sleep for a few minutes), I did not feel as disoriented as I usually do when I am sleep deprived. (The reason for this is that the image of the sunset was still ingrained in my brain, so it gave the impression that it had been a very short night and that I would have to sleep during daytime hours to compensate.)

When I woke up, I returned to my workspace. My teammates were still playing League as they had been before I went to sleep, and I sat down and looked at Visual Studio. I tried to hack together some sort of interface to figure out whether any functionality was possible, but it was futile. By 10 AM, I simply gave up. I failed.

I had really been looking forward to the hackathon, and I met quite a few people there. But I was not met with a stroke of luck, and the hackathon was not as enjoyable as it could have been. If I could go back in time, I would make all the right decisions: convince my friend to let me on the team, bring a pillow and something to sleep on, actually go to one of the Android workshops, talk with the head of the hackathon, etc.

But alas, the result would have been the same regardless. The judges delivered some rather questionable decisions in terms of which projects were “better”: despite my utter failure, I placed third in “best software hack,” while my friend’s team, who had put all their effort into a robotic hand, did not seem to place at all in “best hardware hack.” What? At least there was an “everyone’s a winner” attitude, which is a nice way to end a hackathon. No massive prizes for the winners, like a graphics card or anything like that.

I don’t really know what to do now though; I left the hackathon with an incomplete satisfaction. What can I do instead: order parts for the electric bicycle? Compensate by trying to invite my friends to do something similar? Or just work on the school stuff I’m supposed to finish by the first day of school?

Ugh. I had felt during the hackathon that this was the beginning of my demise; that this was a glimpse of my condemnation; that I was no match for anyone around me in terms of college admissions. It’s not true. But one question still remains: what am I to do now?

Zero-gravity soccer – part 1

A few weeks ago I was assigned a final project. The final project could be anything as long as it’s written in Python. So I chose to make a game.

And so the mad scramble began. Actually, it wasn’t really a mad scramble at all. I took my time with the code, working on it only when I was able to do so. And so without the distractions of my brother, I was able to knock out 8 hours of coding today, which equates to 570 lines to check into source control.

Python is an incredibly addictive language. I thought it was just some simple language for kids; boy, was I wrong. It is a language of elegance, of minimalism. It makes Java look like a rusty pipe under a sink (which it is, for the most part). Say goodbye to curly braces and excess if statements. And bugs are incredibly easy to find, even without an IDE, if there are any in your code.

Python does have its shortcomings, however. Its object-oriented design isn’t exactly familiar, and its mechanics are definitely not explicit. Still, it allows for multiple inheritance, along with a degree of control you could never have with Java. In Java, you have to make a rigid model of the class before actually implementing it, and changing constructors around leads to problems down the line fairly quickly. In Python, however, you can build the implementation first and then make an object encasing that behavior. It is purpose-driven rather than enterprise-driven, and so it works extremely well for small projects.
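A trivial, made-up example of what I mean by building the implementation first and wrapping a class around it afterward:

```python
# Made-up example of the "implementation first, class later" workflow.
# In Java this would mean committing to an interface and constructor
# signature up front; here the behavior can be wrapped after the fact.

def step_physics(entity, dt):
    entity.x += entity.vx * dt
    entity.y += entity.vy * dt

class Ball:
    # The constructor grew a radius parameter later; nothing else changed.
    def __init__(self, x=0.0, y=0.0, vx=0.0, vy=0.0, radius=1.0):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy
        self.radius = radius

    def update(self, dt):
        step_physics(self, dt)   # reuse the free function as the behavior

b = Ball(vx=2.0)
b.update(0.5)
print(b.x)   # 1.0
```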

This is what I’ve been able to accomplish so far. I have until the 20th to “ship” the project, if you will, and I’m quite satisfied with the progress so far. I estimate it will only take 500-750 more lines to bring it to a playable state, but then again, I cannot make a fair estimate of line count because it’s not really what matters. I still need to implement networking, a HUD, and some game-specific behaviors like grabbing the ball and throwing it into the goal.

I shall press forward…

The AoS bot that I originally wanted to make

crossposted from the BnS forums:

A few years back I thought of exactly this: a decent bot. At the time I did not have the experience to actually create AI, but I started slowly working on a client implementation in JavaScript anyway. My imagination ran loose: if I could make an excellent AI, I could start a clan of “noobs”, watch everybody get rekt in League, and then reveal how I did it. Built-in inaccuracies and limitations would prevent any detection of ESP, because the AI would hypothetically only be able to view the map within its line of sight, perform some limited communication, and have a reasonable (and adjustable) hit/miss percentage. The bots could also build prefabs, and other bots could join in or defend while they build.

I thought about a design for such a system of bots for a very long time, and I looked at some dumb bots that some people around here made in Python. But I wanted the “unauthorized” approach, so I was not inclined to create a server-side script. I wanted a client-side approach controlled by the bot master’s computer, so that I could simply and inconspicuously deploy three or four well-balanced bots on a server. The problem, of course, is that there is a limit on clients per IP address.

When I envisioned this concept, the problem was not so much the algorithmic portion as the implementation and the time it would take to complete the undertaking, which I repeatedly underestimated. Moreover, I was not familiar with three-dimensional A* or machine learning concepts. I was eager to learn them, which is really what matters. But time always turned out to be the greatest deciding factor in all of this. My time was always fragmented: 15 minutes doing this, 15 minutes doing that, rather than a solid slot of 3 hours of project time. And due to school, my free time varies from 3 nice hours of relaxation to absolutely nothing.

Many of you think that this is my constant excuse for not being able to do anything. But it’s true, and so my willingness to commit to things has fallen. That’s why I never carried out the whole bot thing in the first place. I just did the backend and that was it. You want to pressure me into doing it for that AoS of the Future project? Fine. You want to pressure me into doing it so I can fill your servers with future-proofness? Fine. You want me to do it for the betterment of all your little FPS projects? Fine.

Suffice it to say, a well-scripted bot would be perfect in its imperfection. But I am not the one to do it.

My faith in humanity has been restored


I don’t know how, I don’t know when, but I feel better now. Not sure for how long I’ll feel better, but certainly I feel somewhat more confident of my abilities now.

That said…

LameBoy

[Screenshot: the LameBoy debugger]

It’s going okay. Right now the project is clocking in at about 5.5k lines of code, so it is evidently still in its infancy, but we are making progress fairly quickly. I had to break everything so that I could add a layer of abstraction. It’s just a matter of cleaning up Denton’s crappy code and future-proofing it.


Amazing find!

So I reluctantly got an older version of Pokemon Type Wild and started playing it. I noticed that the music sounded a bit funkier… and I found that the music in the older versions of Type Wild was MIDI!

I find MIDI to be a highly flexible format (you can make the instruments 8-bit, mash it up, and so on); as such, I cherish MIDI, and especially VGMusic for its massive library of MIDIfied (and original) game music.

This is a lucky discovery, because it means I can remake the MIDIs into nicer-quality audio files for those who don’t have 500 MB worth of instrument data.