Review

Superintelligence

The most telling aspect of Superintelligence is the pair of praise blurbs on its cover and back.

“Human civilisation is at stake” - Financial Times

“I highly recommend this book” - Bill Gates

I’m not sure what I’m supposed to feel, and that uncertainty reflects the general problem with the arguments in Superintelligence. Reading the book, you can move from being terrified by an idea to saying “huh, maybe” within the span of minutes.

Superintelligence’s basic premise is that artificial intelligence may someday surpass human intelligence and, most importantly, move beyond human control. What if this AI decides that humans are unnecessary, a threat, or simply a source of reusable atoms for its goals?

The author, Nick Bostrom of Oxford University’s Future of Humanity Institute, leads the reader toward the conclusion that this is indeed a very likely situation, whether through malice or through the AI’s sheer ignorance of human values.

Bostrom’s chief concern is whether we can constrain a superintelligent AI, at least until we can properly trust that its activities would benefit mankind. It is the vaguest problem among the many others he raises: a superintelligence’s motivation toward self-preservation, its potential ability to control the world, and its ability to choose and refine its own goals. While all of these issues are argued to be inevitable given enough time, it is the “control problem” that determines how destructive the others become.

It is at this point that a further blurb about the book is necessary: “[Superintelligence] has, in places, the air of theology: great edifices of theory built on a tiny foundation of data.”

That review, from The Telegraph, also argues that the book is a philosophical treatise rather than a popular science book, and I agree: when I described the book to friends, most tended to respond philosophically rather than from a technical perspective.

It is with this perspective that Superintelligence applies an approach similar to Daniel Dennett’s in Darwin’s Dangerous Idea - given enough time, anything is possible, regardless of the mechanics.

The simple response is “Well, what if there isn’t enough time?”

This doesn’t suffice against Dennett’s argument (“The universe is this old, we see the complexity we do, therefore enough time is at least this long, and we have no other data point to consider”), but it was a popular response to Superintelligence. I personally heard “We’ll kill each other before then” and “We aren’t smart enough to do it.”

Both of these arguments reflect the atheistic version of the faith The Telegraph suggests the reader needs, and which Bostrom holds to throughout the book: given enough time, superintelligence will be all-powerful and all-knowing - near god-like, except that it can move beyond the physical.

However, much as an atheist can still draw value from the Gospels, even the unconvinced can remember a few sentences from Bostrom and take pause. Bostrom’s central concern is how to control technology, particularly technology that neither we nor anyone else knows how it was made. Moreover, this should be a concern even when the programmers know how a program works but the public using it does not. It is the same concern that makes people nonchalantly assume that the government is already tracking their location and their information.

Even without superintelligence, the current conversation about technology is a shrug and an admission that that’s just how it is. Bostrom leans heavily toward pacing ourselves rather than ending up dead. Given our current acceptance of the undesirable in our iPhones, shouldn’t we also wonder whether we should pace ourselves, or pause and examine our current progress in detail, rather than excitedly wait for a new product?

This isn’t to say we should stop technological progress. Instead, alongside innovation, there needs to be analysis of every step.

Ever wonder what’s in your OS’s source code? Could it be tracking you and logging every keystroke, sent off to some database? What if all software were open source? Wouldn’t that solve the problem?

This isn’t a technological problem, is it? The question of open source for everything is an economic and industrial question, though it may ultimately be solved by technology.

Consider that, in the last twenty years, restaurants and food producers have tied themselves not simply to producing food to eat, but to the type and intent of the food they produce - is it sustainable? Is it safe for the environment? Does it reflect the locale? I imagine few people these days would be surprised to see a credo on a menu alongside the salads.

What about software? Are we only to expect that kind of commitment from ex-hippies and brilliant libertarian hackers? What about Apple, Google and Microsoft? It’s an ideal, certainly - once you show the Google search algorithm, what’s left but for a competitor to copy it? I don’t have an answer for this, but I understand there is an exchange - Google keeps its competitive edge, and it also keeps all my information.

We are already being victimized by unknown technology, and we shrug or make some snarky comment. Even though Superintelligence argues that certain technology is inevitable, we can shape how it is made.

Wouldn’t it be great if we started practicing that now?

You Must Beware of Shadows

The Eighth Commandment of The Little Schemer - use help functions to abstract from representations - is as obvious as most of the Ten Commandments. Of course you would use other functions to support or create abstraction.

To make the case more clearly, the authors represent primitive numbers with collections of empty lists (e.g. (()) for one, (() ()) for two). This is a new level of parenthetical obnoxiousness, and for a while the reader may think, “Are they going to do this for the rest of the book?”, because the authors go on to demonstrate that much of the API used thus far in the book works with this representation. But then there’s lat?
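To get a feel for the trick, here is a rough sketch of my own in Python (the book’s code is Scheme, and names like is_lat below are stand-ins, not the book’s):

    # Numbers "shadowed" by lists of empty lists: 0 -> [], 1 -> [[]], 2 -> [[], []].
    def zero():      return []
    def add1(n):     return n + [[]]
    def sub1(n):     return n[:-1]
    def is_zero(n):  return n == []

    def plus(a, b):
        # Arithmetic written against the new representation works fine.
        return a if is_zero(b) else plus(add1(a), sub1(b))

    def is_lat(lst):
        # A stand-in for lat?: is this a flat list of atoms (non-lists)?
        return all(not isinstance(x, list) for x in lst)

    three = add1(add1(add1(zero())))
    print(plus(add1(zero()), add1(add1(zero()))) == three)  # True: addition survives
    print(is_lat(three))  # False: lat? sees a list of lists, not a "number"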

Testing reveals that lat? doesn’t work as expected with such an abstraction. The chapter concludes with the two speakers exchanging:

Is that bad?
You must beware of shadows

This isn’t a very common thing to read in an instructional programming book; hell, you’d be blown away to see it in a cookbook.

Searching around online, I couldn’t find many people fleshing out a thorough explanation beyond a couple of chat groups, where most users said “I guess they mean... but who cares?” or simply complained about the book’s overall format. I know how I feel.

The phrase is out of place even in The Little Schemer, considering most chapters end with a recommendation to eat sweets. Nonetheless, it is perfect for what the authors want the reader to consider.

Going back to the Eighth Commandment above, it’s a considerable summation of the code-cleaning practices programmers can read up on in books such as Code Complete and Refactoring.

But why end the chapter like this, and why call it "Shadows"?

It’s obviously a parental-level warning in the family of “Keep your head up.” While a programmer can abstract a lot, the taller an abstraction is built, the greater the shadow it may cast over operations that are still necessary, or only rarely necessary, which can be even more painful (read the first chapter of Release It!). The shadows cast by the edifice of your abstraction lead ultimately to bugs, or worse, a crack in the abstraction that can’t be patched.

It’s a more delicate and literary warning than the one Joel Spolsky gave about frameworks. Spolsky, as usual, is more confrontational, and setting aside the possibility of him yelling at me about this topic, the Schemers’ warning sticks better. It’s like a caution given by an old woman near a dark wood.

However, these shadows are not cast by some creepy tree, but by our own code. It’s ultimately an admonishment to test, to check your abstractions in places you wouldn’t necessarily use them, and to be just as thorough as the creators of your language’s primitives. And, of course, to be afraid.

The Little Schemer

Thanks to Cool Pacific

The Little Schemer spells out in its introduction that the book is about recursion. Generally, in programming circles, Schemer is known as a great book for learning Scheme/Lisp (see Paul Graham for all the praise of Lisp you’ll need) or functional programming. While that’s true, on my recent reading of the book, about 20 years after its first edition, the enduring appeal of Schemer lies more in how it presents these ideas than in their practicality in code.

Topical Focus

I’ve said before that there need to be more topically focused, short, consumable books for programmers, in contrast to giant tomes. Developers rarely need an immense index of a language’s every aspect or to know every algorithm; instead they need specific cross-language experience - groups of algorithms, object-oriented programming, and, as with Schemer, recursion, covered in under 190 pages.

The early Lisp families with their annoying parentheses can quickly cause someone new to the language to either give up or invent Haskell. But as this book proves, Scheme’s syntax isn’t the point. The topic is recursion and that’s it. Use a good IDE to help with your parentheses and move on and be done quickly.

Dialogue

Really, Schemer is a Socratic discourse between a very sharp neophyte and his guide. A very short question-and-answer format is maintained throughout, except for the sprinkling of recursive commandments and a handful of asides.

The format is a breather from syntactically dense books (here’s how you make variables, here’s how you make arrays, classes, functions... 250 pages later: now you know new JS framework X), academically dense books, and the “Let’s program to the Extreme with bad jokes” books.

Using this format, Schemer is as nuanced as it comes, often annoyingly so, as the authors walk through recursive functions one logical decision at a time. However, as laborious as this may be, it’s best to heed the authors’ recommendation not to rush your reading, any more than you would rush a good conversation, and to experience this unique approach to programming pedagogy.

The Why of the How

A large intent of this Socratic method is to get down to why a person makes the choices they do, which is far more interesting and better demonstrates expertise. Compare asking an interviewee to write out Shell sort on a whiteboard versus having that same person verbally walk you through a short array using the algorithm while explaining why it’s more efficient than insertion sort.

In a time when a common complaint is that the rush of new frameworks and languages is overwhelming, and something employers expect programmers to keep up with, the main question is “how do I write something?” rather than “why did the language arise the way it did?”

For its format and focus, The Little Schemer transcends the modern sense of programming instruction. It won’t be taught in a coding bootcamp, because in Schemer’s universe coding bootcamps don’t exist: you’re not in a hurry to get a job, because there is no job to be had. Only understanding.

 

Joel on Software

I was introduced to Joel Spolsky’s writing a few years ago when I was learning how to interview and how to run a software team. His most famous article, “The Joel Test: 12 Steps to Better Code,” may appear pretty basic now, but everything Joel recommended was written in the year 2000, before Agile, Atlassian autobots, Chef / Puppet, Jenkins, and Vagrant / Docker were considered essential to even a trivial software project.

I admire that a lot. But I wondered whether there was any reason to still read Joel on Software, which I was picking up for the first time, eleven years after its publication.

As it turns out, absolutely.

Early on, “The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)” was a fortunate starter. I knew what character sets were and have explained to students how a character eventually gets printed on a screen, but when I started reading this article I had also just happened upon an annoying character-encoding bug on my website. The article’s history and breakdown of how indecipherable characters appear in your emails gave me the background to know what the heck I was actually debugging.
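The mechanism behind that gibberish is easy to show in a couple of lines - this is my own toy illustration, not Joel’s: bytes written out in one encoding and read back in another.

    # UTF-8 bytes for "café" read back as Latin-1: the classic mojibake.
    text = "café"
    utf8_bytes = text.encode("utf-8")      # b'caf\xc3\xa9'
    print(utf8_bytes.decode("latin-1"))    # prints 'cafÃ©'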

“Painless Functional Specs” is an excellent pairing to the hundreds of pages of Agile methodology I have read, which have become a generalized gospel - have faith in Scrum! I’ve learned that while you should emotionally stay flexible to change, use Kanban, and take time to refactor, once a project reaches a critical mass you do yourself a huge favor by having done a chunk of planning up front. In Joel’s belaboured case, specs are not sacrosanct, just starting points - what most folks today would consider paper prototypes. I have not done this for many projects, and I have a lot of incomplete work on GitHub.

Finally, in “Don’t Let Architecture Astronauts Scare You,” Spolsky writes, “Remember that the architecture people are solving problems they think they can solve, not problems that are useful to solve.” In the age of exploding frameworks and write-code-without-coding Shangri-las, we are, to some degree, only reinforcing existing structures and problem approaches, not really creating new solutions. Time savers like Rails, Angular, Unity and such are fantastic, and a great entry point for those entering particular programming spaces. But as Joel points out - don’t expect a revolution because we’ve gone up one abstraction layer.

I’d add that there’s tons of great historical information in the book that could only be written by someone living through those times, especially related to industry developments, old (bad) practices in software companies, and even a couple of knocks at Duke Nukem Forever’s release cycle.

Ultimately, Spolsky comes off a bit like the software version of Anthony Bourdain: rough around the edges, highly critical, but with the best intentions for creating great software, improving people’s lives with software, and improving the work lives of those developing it.

I AM ERROR

So elegant

I’ve already written on the excellent Platform Series from MIT Press, and since discovering the series, the platform that I have been waiting for is the one that meant the most to me growing up – the Nintendo Entertainment System.

I AM ERROR by Nathan Altice finally dives deep into the hardware architecture of the NES and the expressive capabilities derived from its chips. I bought the book immediately, but surprisingly I wasn’t as engaged as I thought I would be.

ERROR is an excellent book, incredibly detailed and well-researched, with a breadth of topics ranging from hardware development in Japan to assembly-coded music to ROM hacking across the internet in the early 90s.

These topics aren’t why ERROR didn’t pull me in. Instead, it’s a victim of the series’ own success. Racing the Beam, the inaugural title in the series, covering the Atari 2600, is also amazingly detailed, particularly in the translation of game concepts through the examined hardware architecture into actual expressive gameplay.

ERROR mirrors this same level of translation, which is after all the intent of the series, and while the NES has a lot more going on, the a-ha nature of that translation is much clearer and more accessible in Beam.

ERROR moves around a lot more, and is specifically concerned with the idea of “translation,” given that the NES is known in its home country of Japan as the Famicom. The book’s goal from the introduction is to determine how many perspectives on translation we can take when studying the NES, and it’s a brilliant thematic idea.

But the hardware isn’t as interesting in this case if you have been following the series, and honestly, for a person of my limited acumen in the realms of PCB and chip technology, it was a little over my head at times. That said, maybe one day I’ll appreciate it a lot more.

In contrast, two other titles in the series - The Future Was Here and Codename: Revolution - took their platforms, the Commodore Amiga and the Wii respectively, and while touching on the hardware, explored other aspects of expressive interaction with the machine, to the point where the hardware was only a stepping-off point.

It’s like getting really into the details of how strings and pickups on a guitar interact, whilst most folks would only be concerned with “what does it play?”

Future primarily covered the expressive gaming and demoscene the new multimedia computer made possible, and Revolution discussed what it meant for software and hardware to interact in a physical space. Sure, hardware had to be mentioned to start these conversations, but again, it is only the baseline for further exploration of artistic and human ideas transformed into a digital medium. In this way, the books remind me a lot of 10 Print’s vignettes on maze generation code.

To be fair, I should backtrack on my criticism, which comes only from an embarrassment of riches. The intricacies required to program games cleverly on the NES are amazing, deserving a nod of respect to those developers, and the book is a rich primer on how graphics programming developed to a higher level of complexity than on the Atari. Likewise, the chapter “2A03,” on the NES’s sound chip architecture, would be my first recommendation to anyone interested in sound chip programming, and a nice slice of humble pie for anyone who currently does it with any degree of ego.

Finally, the chapter “Tool-Assisted,” while titled after popular tool-assisted speedruns (and, I’d note, glitch fun), has a wonderful, well-explained history of hardware emulation, digging deep into IBM’s history, that even for a software person not interested in games is interesting on its own and increasingly important given the emulators used in production and development environments.

Overall, I’d recommend buying the book if you have been reading any of the Platform Series, as it is likely the most well-researched title yet; and if you have not read any of these books and are not a hardware nut, reading Racing the Beam and I AM ERROR back-to-back would be an excellent combination to get you started on that path.

Silicon Valley Season 2

Silicon Valley is funny. Mike Judge has a lot of cred in finding the absurd in modern middle-class and suburban life, and as the show’s executive producer he brings that style expertly to Valley. The show has always done a great job of actually telling jokes, and of finding the humor in character motives rather than tacking on whatever it could shoehorn in.

I really liked season one of this show, and I liked season two, which recently wrapped up its run on HBO.

Unfortunately, the show is already showing fatigue. The season’s plot revolves around the continual fight between Pied Piper and the CEO of Hooli, who attempts to gain control of Piper’s middle-out data compression algorithm through any means necessary while the Piper staff try to secure funding to fully launch their product.

The problem with this setup is that it is completely episodic and random – "Oh look, Hooli is pulling some bullshit legal tactic, oh now they’re doing something else I already forgot about because it’s brushed off as soon as it happened, WHAT? our angel investor is a crazy person and now he’s doing weird eccentric thing X, which as with Hooli we’ll discard as a memory and plot thread when the credits roll."

It’s not a story that actually builds, and because of that, when the Piper crew eventually succeeds, I don’t really have a sense that they were up against much - just that they were annoyed throughout the season.

In contrast, season one actually built toward TechCrunch Disrupt and involved more than outside annoyances - the relationships of the team, the transition to a real company, and competition. It was a lot more intuitive, and as an audience member you could anticipate the conflict, and what you would expect, from a growing company.

All of which totally sucks, because this season actually had more heart.

Richard’s pep talk that the team was there to “build epic shit,” alongside Jared’s maudlin speech that he has had the best time at Pied Piper even though it involved so much stress, takes something thoroughly commodified - the start-up - and actually gives it some warmth.

There are a lot of folks in Silicon Valley, and here in Seattle, less concerned with what they are passionate about and more concerned with what will sell, and this is more insidious than some huge company like Hooli. It’s an internal and intentionally adopted destruction of one’s dreams, rather than an antagonistically driven defeat.

The Piper crew appears to genuinely want to do more for people, and the speeches delivered by Jared and Richard legitimize this drive, contrasted excellently with Belson’s comment that “I don't want to live in a world where someone else is making the world a better place better than we are.”

I look forward to season three, but I hope that the humor and conflict can grow from these heartfelt motives in order to find insight within the laughs.

Cheers to small books

Recently I picked up Thomas Schwarzl’s 2D Game Collision Detection. It’s a couple of years old, and I already have books that are much more thorough on game development, such as Mathematics and Physics for Programmers and Game Physics Engine Development, but I wanted a book that was as stripped down and direct as possible on a topic I needed a little boning up on.

And indeed the book did exactly what it said it would do and no more, a result a lot of programming books fail to deliver. Simple question - how do I solve the tunneling problem in collision detection? For Schwarzl it’s a couple of pages; for a lot of other books, their thoroughness in answering this kind of question becomes less a technical issue and more a readership problem.
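For the curious, one common answer - sketched here in Python by me, not taken from Schwarzl’s book - is to test along the path of a fast-moving object instead of only at its end-of-frame position:

    def overlaps(ax, aw, bx, bw):
        # 1D interval test: does [ax, ax+aw) intersect [bx, bx+bw)?
        return ax < bx + bw and bx < ax + aw

    def swept_hit(x, width, velocity, wall_x, wall_width, substeps=8):
        # Sub-step the frame's movement so a fast object can't jump a thin wall.
        for i in range(1, substeps + 1):
            step_x = x + velocity * (i / substeps)
            if overlaps(step_x, width, wall_x, wall_width):
                return True
        return False

    # A 2-unit-wide bullet moving 100 units in one frame toward a 1-unit wall at x=50:
    print(overlaps(0 + 100, 2, 50, 1))   # False: the naive end-of-frame test tunnels through
    print(swept_hit(0, 2, 100, 50, 1))   # True: sub-stepping catches the wall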

While I wouldn’t knock books attempting to provide a lot of knowledge for my page-buying buck, I would say instead that there is a space missing in programming literature, which is the small book, and in particular, the small specialized book.

I have a couple of algorithm books on my shelves (well, in stacks on my floor), but some of these, while serving as rich collegiate textbooks, aren’t anything that can really engage the beginner or improve the intermediate developer.

By all means, I won’t return Sedgewick’s Algorithms; however, I would like to see books along the lines of 4 Sorting Algorithms that barely top 90 pages. Books of this variety wouldn’t be intended as the end of all study, but they could provide a low cost of entry into a new subject without intimidation, high price, or excessive detail.

Sometimes, you just want to be told in a few sentences what a bubble sort is without, for the moment, worrying about its O-notation compared to other algorithms. Similar series elsewhere - A Very Short Introduction, How to Read…, and every book on meditation - demonstrate the practicality and demand for reading of this type.
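In that few-sentences spirit, here is roughly all a first pass needs to say (a quick sketch of my own): bubble sort repeatedly walks the list, swapping any adjacent pair that is out of order, until a full pass makes no swaps.

    def bubble_sort(items):
        items = list(items)          # work on a copy
        swapped = True
        while swapped:
            swapped = False
            for i in range(len(items) - 1):
                if items[i] > items[i + 1]:
                    items[i], items[i + 1] = items[i + 1], items[i]
                    swapped = True
        return items

    print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]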

The You Don’t Know JS series from Kyle Simpson is a perfect example, meeting all these criteria: short, detailed on a specialized topic, and without too much extraneous information. Want to know closures thoroughly, but JUST closures? Well, Simpson’s written that book.

As literacy in programming expands, I expect these types of books to appear more consistently, and we’ll see a move away from textbooks and language-survey tomes toward very specific topics, presented in a way readers can quickly consume and then apply.

Course I guess I could start writing them myself...

Who started it?

In the final chapter of Steve Jobs, author Walter Isaacson assembles a bulleted list of Jobs’ greatest career contributions – the iPhone, Pixar, the whole App ecosystem – and of course the Macintosh, which Isaacson describes as: “[it] begat the home computer revolution and popularized the graphical user interface.”

Brian Bagnall, author of Commodore: a Company on the Edge, would beg to differ:

“[The] rosy picture of Apple starting the microcomputer industry crumbles under inspection,” he writes in the introduction to his book. “Commodore put computers in the hands of ordinary consumers.”

Bagnall counters with his own list of what he believes were Commodore’s successes – the first to sell a million computers, the first major company to show a personal computer, and the first to release a multimedia computer.

In this way, Bagnall’s book begins as a direct challenge to Isaacson’s, but aside from this opening salvo and turf fighting, the books are excellent complements to each other.

While Isaacson’s book is not strictly about Apple the company and Bagnall’s book is about Commodore as a whole, the two have a lot in common: each author had a tremendous amount of access to the people involved in the history of these companies and saturated his book with quotes and firsthand sources, and both are very concerned with, and detailed about, late-70s and 80s computer history.

Commodore, which closed its doors in 1994, was the progenitor of the Commodore 64 and the manufacturer of the Amiga. It has become emblematic of the shifting sands in the computer industry during the transition from the 80s to the 90s. Commodore had money, had technology, had vertical integration (much emphasized by Jobs in his biography), and yet it couldn’t survive. While Bagnall has yet to publish a long-awaited follow-up to Commodore, The Future Was Here by Jimmy Maher, a book profiling the Amiga, explains that Commodore simply didn’t know what to do with its computers, or how to market them, as multimedia became a requirement for consumers. Jobs, however, certainly knew how to market his machines.

Bagnall’s book makes a thorough and persuasive presentation of the contributions of Commodore, and notably of its technological heart, Chuck Peddle. You can’t read Bagnall’s book and then look at Isaacson’s bullet-point list of Jobs’ accomplishments without thinking “Well…..”

Unfortunately for Bagnall’s subject, history is written by the victors. Or rather, fans of the victors it seems.

This collision between the two authors is central to a lot of what is currently being written about personal computing history. There’s a plethora of books available now that attempt to state who invented the computer, or who sparked the revolution (since it needs to be called that for whatever reason), or what influenced the revolution - all ultimately boiling down to the question: who was first, such that they deserve the credit?

Commodore, to its credit, is getting recognition - the documentary From Bedrooms to Billions, the aforementioned The Future Was Here, and the tremendous amount of retro computing interest have brought the company’s technology back into popular view. For many of us, Commodore sits as a time capsule of that era; we can perceive its products unaltered by any present-day form of the company, as is the case with Apple.

For myself and my near-peers, computing history is not just an industry’s history on its own merits; it is also valuable because it is our personal history. We experienced that history, and many of the details filled in by books like Isaacson’s and Bagnall’s are enriching simply because we can say to ourselves, “Oh yeah, I do remember that.” They help us explain and understand the evolution of the digital world, one which we directly inhabit and which is still quite new.

However, I don’t care who started it. Computer history is such a tremendous confluence of factors that assigning historical responsibility is a pointless task. Absolutely, individual people had tremendous impact and influence, but I can’t say Alan Turing started the computer revolution any more than I can say Morris Tanenbaum did. In fact, it is the combination of factors that makes the field so fascinating. The iPhone wouldn’t be half as interesting if we didn’t have social outlets giving us a constant reason to be engaged with, and notified by, our phones.

I commend Isaacson and Bagnall on their enormous efforts to document the history of personal computing and on choosing such large subjects and figures of importance. While I agree that Commodore has indeed been downplayed, as Bagnall claims, the real problem isn’t that Commodore isn’t getting its just deserts from history. The motivation to claim ownership of the digital age will be there regardless of the history, especially when money, power, and success are intertwined.

Game Programming Patterns

Most game programming books are one of two things: very specific – particular game engines, physics, rendering, AI – or very general, for the beginner.

Robert Nystrom’s Game Programming Patterns is right in the middle.

As the title indicates, the book doesn’t focus on a particular engine, individual component, or language, though examples are given in a stripped-down C++. Instead, Nystrom’s book takes its cue from the Gang of Four’s Design Patterns - presenting a problem, then a pattern that attempts to resolve its complexity, with higher aspirations of reusability and performance for both the computer and the programmer.

GPP is an excellent companion to its inspiration, even for the general design-pattern enthusiast. Games, at this point in time, are something most programmers have some level of knowledge about, if not expert knowledge, even if only as a player. Framing design patterns within this context allows Nystrom to provide examples that are relatable, and to move away from the Gang of Four’s need to be as neutral as possible when outlining their ideas. In other words, it clicks better because the content is very close to our experience.

The book is organized around groups of patterns and most of the chapters are focused on problems that are unique to game development.

The first section revisits several of the Gang of Four’s canonized patterns – command, flyweight, observer, prototype, singleton, state – even taking a number of them to task as misused, bad, or at the very least poorly described by the original authors. Nystrom certainly could have chosen other patterns (composite, builder, and visitor spring to mind) - in fact, it’d be awesome if he extended the series - but he had to find a limit someplace.
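To give a flavor of how one of those revisited patterns lands in a game setting, here is a toy sketch of my own (not code from the book, and in Python rather than its stripped-down C++): the observer pattern with a physics system broadcasting events that an achievement system listens for.

    class Subject:
        def __init__(self):
            self._observers = []

        def add_observer(self, observer):
            self._observers.append(observer)

        def notify(self, event, **data):
            # Broadcast to whoever cares; the sender stays decoupled from the listeners.
            for observer in self._observers:
                observer.on_notify(event, **data)

    class AchievementSystem:
        def on_notify(self, event, **data):
            if event == "enemy_defeated" and data.get("enemy") == "final_boss":
                print("Achievement unlocked!")

    physics = Subject()
    physics.add_observer(AchievementSystem())
    physics.notify("enemy_defeated", enemy="final_boss")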

The other sections dive more deeply into game programming specifics, including sequencing, behavioral structures, decoupling, and optimization, but the descriptions still keep the discussion at the pattern level. These chapters cover problems that arise specifically in game development, as opposed to, say, web development, and Nystrom is even firmer in tone - use the pattern when you need it, not just because.

The patterns discussed near the end of the book particularly belabour this point, as the optimizations they aim for really do make code more complex and harder to debug, but can greatly improve performance for games that need it.

Overall, Nystrom’s book was like a great pair programming buddy. He presents most patterns in the book initially as “hey, let’s just get it working,” then refactors through the introduction of a pattern, and then asks, “do we really need this?”, all the while dropping asides and geek notes in the sidebars.

While this information can come from other places targeted at game developers, Nystrom’s a good writer. I mentioned above that this book fits in the middle; in truth it probably leans more toward the beginner. That said, the value of the book for the more experienced is to gain another perspective on patterns and coding practices from an author who delights in them.

Game Programming Patterns is indeed useful for its titular topic, but as a whole, it made programming, patterns used or not, just fun.

Atari: Game Over

Currently on Netflix, the documentary Atari: Game Over does one thing very well - it endears you to Howard Scott Warshaw.

Warshaw is the designer and developer of the Atari 2600 game E.T., based on the film of the same name, which in 1983 supposedly killed the video game industry - and thus millions of copies of its cartridges were shamefully buried in the desert.

This documentary is the story of filmmaker Zak Penn's attempt to find those cartridges and to answer the question - what happened to Atari?

Like a lot of recent video game documentaries, the film begins with some variant of "Now, video games are everywhere." This is probably an unnecessary line, considering the audience for this film has to be specialized enough to care about buried Atari cartridges, but let's move past it.

The film uses Warshaw as the central character in the history of late-70s and early-80s video game development, explosion, and subsequent downfall. Warshaw, who is now a therapist, had, before creating E.T., developed classic games like Yars' Revenge and the Atari adaptation of Raiders of the Lost Ark. He is naturally a part of this arc of history and admits that he spent decades trying to find the same high he felt as a young man riding the wave of the video game craze.

While Game Over does the perfunctory history of Atari and the interviews with Nolan Bushnell, what it does so much better than its peer documentaries is tie that history to Warshaw, a person the audience can actually connect to. In contrast, Video Games: The Movie has a group of young people talking about how "AWESOME" Atari was, which may be fun for them, but leaves you sitting in the movie asking yourself, 'why should I care?' Game Over answers with Warshaw's experience and life.

The climax of the film, not surprisingly, is the big dig, where Atari cartridges are discovered in a landfill. People come out in droves to see the dig, and a cheer goes up as the old games are found. Ernest Cline, author of Ready Player One, is there too. He appears in several scenes throughout, in a DeLorean from George RR Martin, for no reason I can see other than to throw some nerd credibility in there, because seriously, he serves no plot or historically illuminative purpose. But really, his presence is a lot like the cheer as the games are found.

I am a game and vintage computer collector myself. I don't have a tremendous amount of money, so I often content myself with old programming books from the 70s. I own three Ataris, including a Sears Atari. I own multiple copies of E.T., and my rarest game is Swordquest: Waterworld. I get excited about old gaming stuff, but not enough to break the bank.

Nothing about the Atari dig excites me. As the different people at the end of the film explain, hating E.T. has become fashionable. There are far worse games on the Atari 2600; the Atari 5200 in its entirety is terrible. The dig and its excitement in the film are Internet Exciting, not actually exciting on a gaming level. That strikes me as a little hollow, like throwing a bunch of retro gaming t-shirts on a character in a film and calling them a "gamer." To any screenwriters out there looking to exploit this market, I recommend you use Ernest Cline.

I don't doubt anyone's authenticity, but self-congratulation and masturbation in gaming docs is horrendous, and it's the only part of the film that starts to dip into this realm. Fortunately, Warshaw carries it past this. 

Warshaw, having been demonized for his creation of E.T., is understandably touched by everyone's involvement, hard work, and enthusiasm. This is actually moving. It's validation for a person who had to give up a career he was good at and loved. As the film notes, there are no lifetime achievement awards for Warshaw.

I didn't obtain any new information from this film, but it did make me like Warshaw more, and that's for a guy who is already pretty likeable. If you haven't seen his series Once Upon Atari, it's one of the more in-depth "documentaries" on gaming and programming history I've seen.
