3 Game Books I Just Finished

The Platform Series from MIT Press is something I'd dreamt about but never had the specific description to define what I was looking for, nor the expertise to deliver myself. As the name implies, each book in the series covers a particular platform; as of the time of this post: the Atari VCS (the 2600), the Nintendo Wii, and the Commodore Amiga. Considering the nature of the publisher, the books are not the maudlin, nonsensical nostalgia-gasms that some other companies (BOSS FIGHT BOOKS) have published on video games. 

See, unlike most of what's written about video games, which seems to center on the idea that "video games are popular and have gained credibility", the Platform series treats the individual platforms, their games, and the social cultures adjacent to them as distinct subjects that the authors genuinely believe have something to reveal about the nature of play, simulation, technology, economics, and all the rest of those academic subjects. 

Generalities about the series as a whole aside, each book (and I loved each of them) has its own gems. Rather than summarize them in their entirety - they deserve to be read as such - I'd like to hit on what stood out to me in each one.

 

Racing the Beam 

I have a fascination with the Atari 2600 that a lot of my generation shares. I didn't grow up playing it exclusively (that was the NES), but I did have one in my home for a short while, and I've always loved the charming simplicity of its games, its hellish sound, and the business legacy Atari laid (by the way, if you haven't seen Once Upon Atari, please do). I'm a collector of Atari stuff, and if my income and wife were a bit more open, I'd probably have a near-full collection at this point. 

Needless to say, that's why I bought the book. The best section, and the most important to the rest of the book's arguments, is the long and thorough discussion of the processors and memory locations of the VCS's hardware. Here you learn how racing the beam actually worked: how the beam that drew the CRT TV image, the hardcoded sprites, the sprite memory, and the 6507 all worked together. For one, it's nice to have a set of hardware that you can really wrap your head around. Again, charming simplicity. But more importantly, as the book progresses through the games "Adventure" and "Pac-Man", it delivers a detailed discussion of how those games were crafted with the chipset available. 

You can't understand why "Pac-Man" on the Atari sucked unless you know its chipset, and you can't appreciate how clever "Adventure" was without understanding how sprites were kept in memory. In this long section and the subsequent chapters, the book accomplishes what it sets out to do in the introduction - show how creativity intersected with hardware, and how hardware shaped games. It's something we'd rather ignore now, with such an obscene amount of memory and power available, but the material does affect the medium, and knowing its strengths and weaknesses helps creative professionals know where to cut and what to push. 
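For a sense of what "racing the beam" means in practice, here's a toy sketch (mine, not the book's, and certainly not real VCS code): the 6507 gets roughly 76 CPU cycles per NTSC scanline, and the game's drawing logic has to finish each line inside that budget.

```javascript
// Toy illustration of the "racing the beam" constraint (not real VCS code;
// the cycle and scanline counts are approximate NTSC figures).
const CYCLES_PER_SCANLINE = 76;
const VISIBLE_SCANLINES = 192;

function drawFrame(kernel) {
  for (let line = 0; line < VISIBLE_SCANLINES; line++) {
    const cyclesUsed = kernel(line); // the game's logic for this one line
    if (cyclesUsed > CYCLES_PER_SCANLINE) {
      // On real hardware there's no error - the beam has already moved on
      // and the picture simply glitches.
      console.warn(`line ${line}: the beam won (${cyclesUsed} cycles)`);
    }
  }
}

// A hypothetical kernel: cheap lines fit the budget, busy lines blow it.
drawFrame((line) => (line % 8 === 0 ? 90 : 40));
```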

The Future Was Here

I found this book by far the most fascinating, though I don't think there's anything revolutionary in the author's discussion of the Amiga and its software. It's simply the best book I've ever read on a single computer and its history: well researched, covering every aspect of the system the author could find, including The Bard's Tale, the Boing demo, the demoscene, digital music and video, and the multitasking operating system. 

Putting this book down, you walk away with such an appreciation for the folks who built the Amiga, and for those who used what would now be considered pretty humble hardware to build unheard-of digital expressions. It's one of those moments of understanding - yes, computers are about automation, but truly, and particularly in the demoscene, they are really about expression, and hard-fought expression at that. 

Codename: Revolution

I am not a Wii player. I don't own one; I've played it some, and for the most part I've found it interesting, but never enough to purchase one. Similar to Racing the Beam, the book's authors target the intersection of the platform's technology, including its all-important peripherals, to draw a point about how expression changes with reduced processing capacity, the sensitivity of the controller, and the player's physical game space. 

I enjoyed the discussion of the design of Mii characters, which led to a tangent on Nintendo's early character designs (mouths are hard to draw; cool, give Mario a mustache), but that's not really the power of the book, which comes in the final chapter. After chapters on how the different aspects of the Wii affected game creators and players' interaction with the system, the authors take on the follow-up competitors to the Wii, namely the Kinect. 

Not to say they are outright biased, but it's apparent the authors are definitely not fans. They back it up: while the Kinect was supposed to submerge the player in the gaming environment, the authors point out that players generally don't want to be completely subsumed by the gaming environment. Instead, they want a cybernetic one. Case in point - Wii Bowling. While it may be more "freeing" to have no controller with the Kinect, with a button the player could more accurately control the release of the ball. 

Having personally had to do non-game programming with the Kinect, I found that capturing anything more precise than jumping and leaning was outside its capabilities, and it prevented the user from receiving good feedback on whether their actions were registered. Keys, at the very least, click. 

As I said, I'm not a Wii fanboy by any means, but the book is an excellent defense of where the Wii succeeded and the Kinect failed. Yes, the Wii needed a better library and wasn't as powerful, but when it used motion, it used it right - the motion gave you the control you wanted rather than forcing you to substitute physically jumping for pressing the "A" button (not true for all titles, to be fair).

While I don't have a stake in either camp, the final chapters made me actually care about how 3D motion was used in games, and that is demonstrative of where the Platform series excels - the books teach you, carrying you through history and technology to help you understand game expression, to the point where, even if you aren't playing the platforms in question, you still enjoy that they're out there. 

3 Game Programming Books I Will Never Use

I so wanted these books as a teenager. Unfortunately, my parents' computer couldn't have run either Java or DirectX to the books' requirements, but I guess I figured that if I had them, I could find a way. Still, they filled my head with the games I would've created, which were mainly slight variations on other popular games, ideas I had no materials for whatsoever (such as the popular CD-ROM games of the time), or things that were the video game equivalent of a screenplay about "two friends go on a road trip" - usually RPGs set in some dystopia (thanks, Final Fantasy VII) or something the computer wouldn't have had the memory for.

Either way, they were worth the $4.00 total for the memories. Also, gotta love those covers.

Verifying Non-Risk Status

I'm very interested in finding ways to distribute helping people. Not micro-donations, but rather: if there's someone in my neighborhood in need - the elderly, someone going through home troubles, someone lacking food - I'd love to be able to pull that up on my phone, see the needs, and help where I could. Likewise, with enough scale, I think a lot of simple acts of kindness could happen using the Internet, with people sending alerts and others volunteering for quick jobs. Sort of like how you most likely already help your friends and family. 

However, it only takes one runaway being assaulted or one soup kitchen causing massive food poisoning for the whole thing to come crashing down. While my original thoughts on the matter were toward the latter - how to make in-home prepared food safe for distribution - the former is much more serious than getting around the legal pains of the health department. 

There are a couple of options for verifying that we're working with people who have the best intentions, and for making sure they execute their help correctly.

First is the system we have now, in which people essentially vouch for other people. This is what friends do, and friends of friends, when you need help after your car breaks down. But how much vouching is needed, and who provides it? This is essentially a seller rating system like eBay's, and it doesn't stop the vouchers from having such a low bar of responsibility that the quality of folks gets inflated. That doesn't necessarily mean anyone is doing anything malicious - it's just that they may not know how to cook good meals for the elderly. 
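As a toy sketch of how that vouching might be scored (everything here - the names, the weights, the decay by social distance - is my own invention, not any real system):

```javascript
// Hypothetical vouch-based trust score (all names and weights invented).
// A direct vouch counts fully; a friend-of-a-friend vouch counts for less.
function trustScore(vouches) {
  // vouches: [{ voucherScore: 0..1, distance: 1 = direct, 2 = friend-of-friend, ... }]
  const total = vouches.reduce(
    (sum, v) => sum + v.voucherScore / Math.pow(2, v.distance - 1),
    0
  );
  return Math.min(1, total / 3); // roughly three strong direct vouches to max out
}

// One trusted friend plus two friends-of-friends:
console.log(trustScore([
  { voucherScore: 0.9, distance: 1 },
  { voucherScore: 0.8, distance: 2 },
  { voucherScore: 0.7, distance: 2 },
])); // ~0.55 - some trust, but not fully vetted
```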

The second is another system we have now, which I'll casually call the "church" system. Some central authority is essentially responsible for the activities of its members, and new members have to be added and approved by the existing ones. I like this system a bit more, but if we're using, say, a cloud service to support these communities, how do we then vet the communities themselves? This solution seems to lend itself to a more distributed service, where, if a "church" wanted to get this up and running, it could, but whoever was writing the software would just sort of put it out there and hope for the best.

I'm okay with this idea, and it does make giving communities more efficient, but it's not doing what I hoped the system could do, which is to get folks everywhere to find small ways to help their community, with immediacy and in a less bounded context. Even if you're in a giving community, it may not be aware that someone a block from your house is in need, so there's some pre-/self-selection in those activities - which goes back to the point above, that it's really about efficiency then. 

At this point I really don't have an answer, but I hope putting it out there may, for the non-existent readers of this website, inspire some movement toward a safer and more assured way to help those in need.  

Making Everything Public

This week I decided to open source all my software on my GitHub account. It's not much, and it's not all there yet - I'm still cleaning up a few projects. Most of the projects on there were an hour or so of goofing off more than anything serious, but the rest will be up shortly.

I decided to do this for a couple reasons.

First, as I review candidates at my company, I spend a fair amount of time critiquing their GitHub projects. This gives me insight into their skill on a couple of points: I can see how they code, and I can see what they're into. But it felt a little unfair to deny the candidates that same ability to scrutinize me. What kind of coder is their future employer? What does he like to release?

Second is the issue of release. By putting software out into the world, you are committing to some level of accountability and quality. You start to look at your software differently knowing that other people could critique it, judge it, mock it, reuse it, whatever. 

Third, sharing is just a flat-out good habit to get into. Even if no one ever looks at your code - most likely the case in my situation - it's going to make things a little easier when you do have to share code with people. You'll be less shy, you'll be more open, and you'll expect feedback regardless. 

Likewise, connected to most of the projects is a public Kanban board on Pivotal. This might be a bit of overkill, but it provides a further level of sharing and exposure to the world. Call me crazy, but someone might actually want to participate in one of my projects at some point, and an organized board of all the issues shows them you're on top of the state of the software and ready to share development. As for candidates applying to jobs, I'd simply be super impressed if they did the same, since it mirrors a professional production environment. If they can run that and motivate themselves on their own, I'd have no trouble believing they could do so at my company. 

 

Code Fetishism is Bad

I hate this shit:

This shit:

Oh and this shit:

Granted - I get why these scenes are in television and movies. They take a dry subject and make it look at least a little interesting. Furthermore, technology always carries a bit of mystique with it. That's fair, and I don't think movies should stop doing it. I'd hate to have to watch some hacker character run Linux updates when they could have cool graphs and code moving across the screen. 

But I can't get past it. There are three main issues I have with these scenes, and all of them undermine people actually becoming awesome superhackers: 

Confusion about the distance from knowledge to action. There is a lot of stuff you need to do to make code work, even hacked-together code: a lot of support programs, technical manuals, and such to slog through. Hey, but we've got a goal, so it's worth the sacrifice. Sure, movies set up false expectations about the work involved in everything, but there's not even a training montage in computer movies. 

Focus on self-satisfaction. It's like watching the Food Network - chopping potatoes is an exquisite experience, but satisfaction in actual cooking, at least for me, comes not from softly smelling fresh-cut rosemary but from the assemblage of everything. In The Social Network in particular, the slow and overly dramatic drawing of the algorithm, while very cool, is not the point of what Zuckerberg is even doing in the scene. It's pointlessly indulgent, and therefore a waste of any decent coder's time to revel in such things. 

Confusion between what looks impressive and what's actually reflective or meaningful. Maybe this is just a hole in the market for movies where people actually draw meaning from interacting with computers and code, but all of this flash - unrealistic, ego-focused flash at that - visualizes excellence in computing and even hacking in such a false and bullshit way that it distorts what's substantive. 

Each one of those issues ultimately deters people from computing, as they completely misdirect its value and confuse where you find meaning in it. It's false advertisement. 

I suppose people can look past it, and perhaps these scenes are entry points, but the fetishism has to be abandoned nonetheless. 

Two Microwaves

I have two microwaves in my life that have design features I cannot understand.

The first, pictured above, is at my house. For some reason, you have to press "Time Cook" to get it to run through a timer. I understand the manufacturer was trying to create a distinction between timed and auto cook, but honestly, shouldn't timed cooking be the default? Whatever else can just be a special button. 

The second is at my office, and it beeps its full cycle of end beeps regardless of whether you've opened the door. What is the point of that? Yes, I get that checking whether the door has opened is an extra check for the manufacturer to implement. But pissing off your users is probably not the better trade-off.

Also, I want to know what possessed the person above to post that video. 

Four Documentaries on Arcade Gaming, and Why Most Recent Documentaries Are Boring

Couple quick thoughts on this group of recent nostalgic bouts:

Chasing Ghosts

With the most history and interviews, this is a good compendium of the rise of the arcade in the early-to-mid eighties. Sadly, it's not really interesting, and there's nothing to draw from knowing this information - or at least nothing the filmmakers want you to take away. The film has no key tension other than an arcade champions' reunion, and the filmmakers never underscore why that was the central crux of the movie, other than that it's something that actually happened. 

High Score

The worst among the movies, this film follows a gamer attempting to break the Missile Command high score. While it's a good objective goal for the main character, it doesn't really have any actual impact on the film. It's just, again, something that happens. Video game record attempts are usually recorded on video and mailed in, so the attempts themselves are not very dramatic, and more importantly, achieving a Missile Command high score is an endurance challenge rather than a test of skill. Since the most exciting possible event is somebody staying up for 80 hours playing video games, the movie is really slow and ultimately says very little about gaming.

King of Kong

Another record attempt, in this case the Donkey Kong high score, held at the time of the film by Billy Mitchell. Mitchell is the best arcade gamer of all time and has an amazingly arrogant and proud personality, so this film, while the record itself is only mildly interesting, is fun to watch because there's actually an antagonist to the main character's goal, one that registers with the audience. It doesn't say much about the game or gaming generally, but Mitchell's personality makes the film worth it. 

Space Invaders

Best of the bunch - Space Invaders documents several very elaborate personal arcade collections, delves into the history of arcade gaming and the video game hysteria of the 80s, and tries to understand why people would devote so much to such large, old-fashioned games. Ultimately the answer is mainly nostalgia and reconnecting with one's youth, but the filmmakers also discuss collecting more generally and what it means to maintain something that is horribly out of date. 

 

Most of these movies are bad. Not just low budget, but boring, dull, and wandering. The problem with the ones above, and with a lot of the low-budget documentaries coming out, is that they miss the point of making a film: to make us feel something and connect with a different world, just like a fictional movie does. Instead, most of these new documentaries are just things happening that happen to be filmed.

By far the worst I've seen is SOMM, about a group of men trying to attain the highest rank of sommelier in the US. The movie just assumes that if it shows people drinking fancy wines, I'll somehow care because, you know, I like wine. However, even though it's a rigid test, the film never really sets up the conflict of passing it. They just create a mystique that it's hard, but I don't know why it should be that hard (if you watched a movie on astronaut training, say, you'd understand why the bar is so high) or why I should care that people can pass the test. 

Likewise with the gaming films above, the filmmakers appear to have thought: hey, here's a gaming thing, let's shoot some reels, throw in a couple of transitions with close-ups of joysticks and old game art, and we've got a film. I'd like to say I'm insulted, but in truth, I figure the intentions of the documentarians were pure; it's just that the material is not all that exciting.

Video game high scores just aren't that interesting to modern gamers, nor are they visually interesting. Watch some badass pwn on Call of Duty and you'll at least see something visually cool, but Donkey Kong, whether on the first level or the last screen, looks pretty much the same at every point in the game. Scores are also antithetical to the modern gaming community's intent to be more open and inviting rather than ultra-competitive. High scores breed people like Billy Mitchell, who don't make me want to play video games. This doesn't mean there aren't competitive people; it just means the epitome of a great gamer is not necessarily his/her competitiveness. In fact, in games like Minecraft, it may be the exact opposite.


IE11 Reader View

A couple weeks ago I wrote about a number of things in Internet Explorer 11 that I had to learn. Well, since then I've learned a couple more things. 

First off, if you have the skip-ahead feature enabled, reader view will actually skip ahead for you and preload the next pages in reader view as well. This can be a bit problematic: you may have multipage articles, but you may also have skip-ahead pointing at unrelated articles, which could be disorienting. And yet you may still want both features enabled at the same time. 

Well, I tried a couple of different approaches to make this happen. 

First, I tried to hook into the event that fires when you pull up reader view. While IE11 has a bunch of events tied to pinning sites, I couldn't find any that fire when reader view launches. They might be out there; if that's the case, I'd love to hear about it. 

So I tried the onfocus and onblur events, which weren't firing at the right time. I was hoping I could follow those events and then remove the rel="next" attributes so the next page of reader view wouldn't load. I also noticed that even on sites like MSN, reader view removed the ability to swipe ahead anyhow.

Second, I looked around and found a Stack Overflow post that mentioned that <pre> tags prevented reader view, calling this a bug. I disagree - I think it's intentional, since reflowing preformatted text would completely ruin the point of <pre>. Regardless, this led me to realize that if I had a <pre> tag, I could prevent reader view from working when the browser looked ahead at the next page. Thus, I added a <pre> tag to every page. 

This doesn't break reader view for the page you're actually on, because I remove the <pre> tag with JS as the page loads.
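Roughly, the trick looks like this (a minimal sketch; the element id and placement are my own choices, not anything IE requires):

```javascript
// Every page ships with an empty, dummy <pre> that blocks IE11 reader view:
//   <pre id="reader-view-blocker"></pre>
// Once the page actually loads and runs its scripts, the tag is removed,
// so reader view works on the current page - but the preloaded "next"
// page still contains its <pre> and won't render in reader view.
document.addEventListener('DOMContentLoaded', function () {
  var blocker = document.getElementById('reader-view-blocker');
  if (blocker) {
    blocker.parentNode.removeChild(blocker);
  }
});
```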

So now I have reader view for the page I'm on, but because the <pre> tag on every page I'm not on blocks reader view, I still get swipe-ahead without the look-ahead reader view. 

 

When web dev goes political

The release of HealthCare.gov and its subsequent failure is not something most web programmers would find very surprising. 

This has nothing to do with the "500 million lines of code" figure, which is bull anyhow. It has to do with the organization of any project at this scale, and with what most people would anticipate about how the government runs software projects.

There are probably some groups within the government, particularly within the military, that run awesome shops and would put most private enterprise groups (including my own) to absolute shame. With HC.gov, however, we knew we were dealing with a new team, new objectives, and an untested user market. That team was going to be green in its field regardless. 

Furthermore, we knew this green team was going to have to scale up immediately. Consider a large application like Facebook. Its user base did not show up on day one. Its feature set, for all its photo and tagging capabilities, was nowhere near today's at the beginning. Facebook was something a motivated developer could design and test with a small group, making incremental improvements as the software was used.

Not so for HC.gov - everything had to work on day one. What happens if you need to update the system? You can't - it's just gotta work. 

This is not how most web applications are developed. It is how a lot of desktop applications are developed - basically, choosing which bugs to ship with. And that makes the expectation that "a website should just work" a real problem, because there is no version 2.0. This is now, and now it should fucking work.

What those of us in this industry could not have known about is another of the most common problems of green teams: a lack of testing. 

As so many books on testing will tell you, you are not done with something unless it passes a test. I would say "the home page loads" or "a new user can sign up for insurance" would be pretty big tests to pass. A beginner tester might note that sign-up is not something that would ever be a single test, and that's exactly the point: that integration obviously sits on top of what are most likely thousands of smaller tests. You wouldn't even get the chance to run the larger one if those didn't pass. So yeah, obviously someone skimped somewhere.
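To make that layering concrete, here's a hedged sketch (every function and field here is a hypothetical stand-in; I have no knowledge of the actual HC.gov codebase):

```javascript
// Hypothetical illustration of test layering - none of this is real HC.gov code.
const assert = require('assert');

// A small unit test: one validation rule on one form field.
function testSsnValidation(validateSsn) {
  assert.ok(validateSsn('123-45-6789'));
  assert.ok(!validateSsn('not-an-ssn'));
}

// The big integration test - "a new user can sign up for insurance" -
// only has a chance of passing if thousands of tests like the one above do.
async function testNewUserCanSignUp(app) {
  const user = await app.createAccount('test@example.com', 'hunter2');
  const application = await app.submitApplication(user, { income: 30000 });
  assert.strictEqual(application.status, 'submitted');
}
```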

And take note: none of this has to do with benchmarking tests or the like. If processing applications were merely hung up on performance, someone could run the backlog by hand at 3 a.m. and succeed, and fixing it would be like the AOL days - the government would spin up more servers and database replicas, something pretty trivial at this point in time. To my knowledge, that isn't the issue.

Most often, when you're trying to trim costs, TDD and adequate QA are the first things dropped. So it would be no surprise if this is where the fat was cut - except they weren't cutting fat, they were cutting meat. 

I'm not a behind-the-scenes coder who could actually verify these issues, but it doesn't take a big leap to guess at them, and to take heed of the troubles a disaster of this magnitude demonstrates. The President looks incompetent, and his opponents look spot-on correct that the government is incapable of providing healthcare. 

The truth is that the last point hasn't even been demonstrated - the government and the Obama administration just didn't produce the software to provide healthcare. And really, it's gotten to the point where that makes no difference. 

