#!: Predictable Randomness

My algorithmic poetry

Nick Montfort has written two books that deal with randomness and expression. The first, 10 Print, I reviewed a couple of years back and found to be somewhat indulgent but ultimately worth the read, and I recently got around to the second, #! (pronounced “Shebang”).

In a similar style to 10 Print, #! uses a variety of algorithms to produce poetry without human interaction, showing the code at the beginning of each chapter and the resulting poetry after it. Many of the contributors state that they groomed the generated text for the best bits, but I wouldn’t say that’s any different from using a maze-building algorithm and selecting the best results for a game’s levels. Either way, the algorithm still produced the original content.

So let’s get to the obvious question: is machine poetry any good?

Well, not really. I imagine that’s not too surprising an answer. Many of the poems are not so much poems as patterns of letters, as in “All the Names of God,” which slowly builds up longer combinations of letters by cycling through the alphabet, and “Alphabet Expanding,” which is exactly what it sounds like: the alphabet written repeatedly, with each loop increasing the space between the letters.
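For a sense of how simple these pattern generators are, here’s a rough sketch of what an “Alphabet Expanding”-style loop might look like (my own reconstruction in Python, not Montfort’s actual code):

```python
import string

def alphabet_expanding(max_gap=3):
    """Write the alphabet repeatedly, widening the gap between letters each pass."""
    lines = []
    for gap in range(max_gap + 1):
        # Join the 26 letters with `gap` spaces between each pair.
        lines.append((" " * gap).join(string.ascii_lowercase))
    return "\n".join(lines)

print(alphabet_expanding())
```

The whole “poem” is one loop and a join; the page does the rest of the work.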

Aside from these pattern generators, many of the poems use a bank of phrases or letter groups that are chosen mostly at random to create semi-coherent poems. However, the phrases and groups were selected precisely because they’d sound poetic in just about any combination.
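The phrase-bank poems boil down to something like this sketch (the phrases here are my own invented stand-ins, not from the book):

```python
import random

# Hypothetical phrase bank; each contributor curated their own.
PHRASES = [
    "the moon forgets",
    "a quiet engine",
    "we dissolve in light",
    "salt on the wire",
]

def stanza(lines=3, seed=None):
    """Assemble a stanza by drawing phrases at random from the bank."""
    rng = random.Random(seed)
    return "\n".join(rng.choice(PHRASES) for _ in range(lines))

print(stanza(seed=7))
```

Because every phrase was chosen to sound poetic, any random draw reads as passably poetic.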

The next obvious question, then: at the very least, is it interesting?

From a technical standpoint, not especially. An intermediate programmer may find it interesting to see how bits of programs dealing with randomness or pattern making can be coded succinctly, which was part of Montfort’s argument for the elegance found in 10 Print.

However, what 10 Print did well was allow its writers to imbue the mazes with meaning in different contexts, whether artistic, ludic, or technical. Here, the proof is on the page. Do you enjoy what you are reading? Unfortunately, the answer is mostly no.

#! is at the very least neat looking in certain sections and will be a great book to keep on my shelf to confuse my kids when they’re young. Thematically, it fits perfectly with Montfort’s other work on creativity, expressiveness, and processing, so it is no surprise he took a chance on this topic.

Unfortunately, the book’s contributors didn’t do much to vary their algorithms, each settling on a simple program ranging from roughly half a page down to a single line, depending on the text bank.

The approach feels lazy: there isn’t much to explore on the topic of procedurally generated poetry when the procedure is basically the same every time, with a little flair thrown in here and there.

This is definitely a topic I imagine Montfort and others will circle back to as time goes on, but I hope that when #! 2: Electr1c Boogal0o comes out, we get to see approaches to procedural literature beyond nested loops with calls to a rand() function.

Flash: An Unintended Eulogy

On July 25, 2017, Adobe announced it was ending development of the Flash platform in 2020. Three years earlier, when Anastasia Salter and John Murray wrote Flash: Building the Interactive Web, they concluded that, while the platform was in decline, the journalistic eulogizing of it was undeserved: “Flash may die someday...the web will be resplendent in its progeny.”

Despite their conclusion, I think Flash deserves a proper eulogy, and this book is it.

In another excellent entry in MIT Press’ Platform Series, Salter and Murray have to contend throughout with what many people’s last memory of Flash is: the divisive fight, picked by Apple and led by Steve Jobs, who described Flash as a platform that “falls short” in the mobile era. Comments like this from Jobs will most likely be remembered as the most historically damning, and they were the source of many of the immediate eulogies that circulated on the web in 2010. However, Apple’s damnation, Salter and Murray point out, also drew out the problems with Apple’s advocacy for its own app model and the difficulties facing open web standards.

“While Flash’s marketplace was completely free, without any intervention by Adobe...Apple had a different version for the web,” they write, with a hint of the annoyance plenty of app developers have felt since the opening of the App Store, with its myriad rules and regulations that are often more about content management than technical necessity. The most notable example is the banned app Phone Story, a casual game that had players catching suicidal workers at the factories that supply Apple and brutalizing children who dig for the minerals used in chip manufacturing.

Even with regard to open web standards, anyone over thirty most likely remembers “the browser wars,” in which the compatibility of web standards went off the rails as Mozilla, the W3C, and Microsoft vied for leadership of the standards, a battle that left scars web developers still feel today.

The authors argue that, amid the wars and the development of the app marketplaces, the centralized nature of Flash allowed it consistent compatibility (even today Flash is backwards compatible into the ’90s), and the platform’s open distribution model was “essential to free expression.”

While a nice snub of Flash’s critics, it’s clear throughout the chapters that Flash had a scattered nature, from its development history to the quality of content put out by its users, which ultimately made it brilliant as a creative platform. Scattered may sound disparaging, but I don’t mean it that way.

With Flash you could publish anything, amateurish or professional, and it was accessible in the majority of browsers. In this way, artists and developers were able to experiment with short films, design, video games, narration, UI, and any other interactive environment they could dream up. Newgrounds is repeatedly pointed to as a hub of this experimentation. In these spaces, people could do what was often missing in media production: sketches and traces. This is how people get good at things, and for visual and interactive media, opportunities to practice these skills were most often confined to large production houses, television, and AAA game studios.

Sure, you could make a game, put it on a floppy or CD, and pass it around to your friends, but your distribution, and hence your feedback, was extremely limited. We’ve seen similar sketching in social media, on YouTube, where entire shows are done with a laptop webcam, and of course in blogging. This review doesn’t have to meet the editorial standards of a print publication dependent on advertisers. I’m just writing for fun about something I liked. Flash allowed its users the same opportunity: a (mostly) low barrier to entry with immediate distribution.

But, unfortunately, Flash was also scattered as a technical platform. The final section of Flash is an interview with Jonathan Gay, the lead programmer of the Flash platform. Gay clearly outlines how many technical decisions were, for lack of a better term, just throwing out ideas. And in the final chapter, “Flash and the Future,” we see that as Flash’s popularity waned, the stewards of the platform were not exactly sure how to redirect their powerful platform to make it sustainable for the mobile and application market. They ultimately failed.

The final portions of the book are disappointing to read: you learn how directionless and patched together aspects of the platform were, oddly matched with a strong commitment to the long-term sustainability of SWF objects. The earlier chapters, however, wonderfully detail how enriching the timeline was as a conceptual tool more than a technical one. It simply made sense to design and animation people, even as Flash’s complexity grew with the introduction of ActionScript.

This is the only level

As I finished the book, I found my emotions about Flash were scattered as well. When I was first starting out as a developer, I bought Adobe Flash for $700 with the intention of doing app development, a path Apple very quickly shut down. I read a primer in a weekend and was impressed with the possibilities. At the same time, I had bought a book on HTML5 games, and the JS/CSS combination was more intuitive to me than Flash’s system. Bottom line: I didn’t know what I had really paid for. The platform was interesting, expensive, and ultimately useless to me. Overall, it was an exploration more than a game changer.

Still, I’m reminded that five years before that experimentation, I wasted countless hours in my dorm room on Flash games like Defend Your Castle, repeatedly watched Homestar Runner cartoons, and was wowed by websites that had any degree of animation. By no means was Flash useless as a platform.

Flash deserves a eulogy, but as audiences and developers we shouldn’t feel any sadness. It was wonderful for a time, and like almost all software, it may still work, but it has outlived its usefulness and should be studied for lessons going forward, so that we can look for the next scattered platform that encourages us to just throw out ideas again.

The Annotated Turing: A review of reading

Charles Petzold’s The Annotated Turing was a book I had been looking forward to reading for a while. I felt I basically knew the central tenets of Turing’s universal machine and the halting problem (a phrase not used by Turing), but I lacked an understanding of how the ideas were built up and how they were implemented.

This happens a lot with computer science. Basic algorithmic processes are simple to explain, and the general operation of a computer might be outlined by an intermediately skilled user, but where the rubber meets the road is glossed over. This gap between generalization and specifics is most evident in the blogs and comment threads of technical interviewers who find that their magna cum laude CS graduate interviewee can’t program FizzBuzz.
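For anyone who hasn’t run across it, FizzBuzz is a trivial screening exercise, something like this Python version:

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n: multiples of 3 become "Fizz",
    multiples of 5 become "Buzz", multiples of both become "FizzBuzz"."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```

That a graduate can recite theory yet stumble here is exactly the generalization-to-specifics gap.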

For my part, I had watched a visually pleasing YouTube clip on Turing’s halting problem, I knew the requirements of a Turing machine, and I had even read through parts of An Introduction to Functional Programming Through Lambda Calculus, so I felt I would be pretty comfortable finally settling down with Turing’s paper “On Computable Numbers, with an Application to the Entscheidungsproblem,” guided by Petzold.

As Petzold explains at the beginning of the book, Turing’s paper could be reassembled from Petzold’s annotated version. Petzold provides a fairly thorough but hurried explanation of the math you’ll at least need to have heard of to continue with certain sections of the paper, all building up to chapter 14, “The Major Proof.”

And this is where I fell off, and where my biggest takeaway from the book occurred, albeit one independent of its subject matter.

In chapter 14, as Turing comes to the conclusion that a solution to the Entscheidungsproblem is impossible, I felt nothing. Throughout the book, I knew I was missing some concepts and that I could have spent more time with the unwieldy first-order logic equations that were presented, but that wasn’t the reason I didn’t respond with “Ah! Of course!” when Turing reached his conclusion.

Instead, it was because the entire time I was focused on how the book could be building toward the YouTube video. And for a variety of reasons, it just wasn’t there. I kept looking for, and assuming, that certain parts were clues to what I already knew rather than simply listening to what Turing was saying in the moment.

Above, I said that there is a huge difference between general understanding and detailed understanding. There is nothing wrong with the former, as it eventually leads to the details, but it was an error on my part to assume that general understanding was understanding, and I distracted myself by demanding that the specific meet the general somewhere.

It’s easy to hold onto the general understanding as something solid, but to move between different levels of detail requires some degree of abandonment.

It’s the difference between “knowing about” and “knowing” a topic, and Annotated helped me understand not so much that that difference existed, but that failing to incorporate that understanding in how you read or digest a new topic can block the shift from one place to another.

Nonetheless, despite my troubles, Annotated is a worthwhile read, even for a not-so-worthy reader.


Superintelligence

The most telling aspect of Superintelligence is the praise blurbs on the cover and back.

“Human civilisation is at stake” - Financial Times

“I highly recommend this book” - Bill Gates

I’m not sure what I’m supposed to feel, and that confusion is reflected in the general problems with the arguments in Superintelligence. Reading the book, you can move from terrified by an idea to saying “huh, maybe” within the span of minutes.

Superintelligence’s basic premise is that artificial intelligence may someday advance beyond human intelligence and, most importantly, beyond human control. What if this AI decides that humans are unnecessary, a threat, or simply composed of reusable atoms it needs for its goals?

The author, Nick Bostrom of Oxford University’s Future of Humanity Institute, leads the reader toward the conclusion that this is indeed a very likely situation, whether through malice or through the AI’s ignorance of human values.

Bostrom’s chief concern is the possibility of constraining a superintelligent AI, at least until we can properly trust that its activities would benefit mankind. This “control problem” is the vaguest among many others: a superintelligence’s motivation toward self-preservation, its ability to control the world, and its ability to choose and refine goals. While all these issues are argued to be inevitable given enough time, it is the control problem that determines how destructive the others become.

It is at this point that a further blurb about the book is necessary: “[Superintelligence] has, in places, the air of theology: great edifices of theory built on a tiny foundation of data.”

The review, from The Telegraph, also argues that the book is a philosophical treatise and not a popular science book, with which I agree; when I described the book to friends, they tended to respond philosophically rather than from a technical perspective.

It is with this perspective that Superintelligence applies an approach similar to Daniel Dennett’s in Darwin’s Dangerous Idea: given enough time, anything is possible regardless of the mechanics.

The simple response is “Well, what if there isn’t enough time?”

This response doesn’t suffice against Dennett’s argument (“The universe is this old, we see the complexity we do, therefore enough time is at least this long, and we have no other data point to consider”), but it was a popular response to Superintelligence. I personally heard “We’ll kill each other before then” and “We aren’t smart enough to do it.”

Both of these arguments reflect the atheistic version of the faith The Telegraph suggests the reader needs, and that Bostrom holds to throughout the book: given enough time, superintelligence will be all-powerful and all-knowing, near god-like, except that it cannot move beyond the physical.

However, much as an atheist can draw value from the Gospels, even the unconvinced can remember a few sentences from Bostrom and take pause. Bostrom’s central concern is how to control technology, particularly technology whose inner workings nobody understands. Moreover, this should be a concern even when programmers know how a program works but the public using it does not. It is the same concern that makes people assume, nonchalantly, that the government is already tracking their location and their information.

Even without superintelligence, the current conversation about technology is a shrug and an admission that that’s just how it is. Bostrom leans heavily toward pacing ourselves rather than ending up dead. Given our current acceptance of the undesirable in our iPhones, shouldn’t we also wonder whether we should pace ourselves, or pause and examine our current progress in detail rather than excitedly waiting for a new product?

This isn’t to say we should stop technological progress. Instead, alongside innovation, there needs to be analysis of every step.

Ever wonder what’s in your OS’s source code? Could it be tracking you and logging every keystroke, sending it off to some database? What if all software were open source? Wouldn’t that solve the problem?

This isn’t a technological problem, is it? The question of open source for everything is an economic and industrial question, though it may ultimately be solved by technology.

Consider that, in the last twenty years, restaurants and food producers have tied themselves not simply to producing food to eat, but to the type and intent of the food they produce: Is it sustainable? Is it safe for the environment? Does it reflect the locale? I imagine not too many people would be surprised to see a credo on a menu alongside the salads these days.

What about software? Are we only to expect that kind of commitment from ex-hippies and brilliant libertarian hackers? What about Apple, Google, and Microsoft? It’s an ideal, certainly: once you show the Google search algorithm, what’s left but for a competitor to copy it? I don’t have an answer for this, but I understand there is an exchange: Google keeps its competitive edge, but it also keeps all my information.

We are already being victimized by unknown technology, and we shrug or make some snarky comment. Even though Superintelligence argues that certain technology is inevitable, we can shape how it is made.

Wouldn’t it be great if we started practicing that now?

You Must Beware of Shadows

The Eighth Commandment of The Little Schemer, “Use help functions to abstract from representations,” is as obvious as most of the Ten Commandments. Of course you would use other functions to support or create abstraction.

To make the case concrete, the authors represent primitive numbers with collections of empty lists (e.g., (()) for one, (() ()) for two). This is a new level of parenthetical obnoxiousness, and for a while the reader may think, “Are they going to do this for the rest of the book?”, because the authors then go on to demonstrate that a lot of the API used thus far in the book works for this representation. But then there’s lat?.

Testing reveals that lat? doesn’t work as expected with such an abstraction. The chapter concludes with the two speakers exchanging:

Is that bad?
You must beware of shadows
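The book works in Scheme, but the situation translates: here’s a Python sketch of the empty-list numerals and a lat?-style predicate (my translation, not the book’s code):

```python
# Numbers as collections of empty lists: zero is [], one is [[]], two is [[], []].
zero = []

def add1(n):
    return [[]] + n

def is_atom(x):
    return not isinstance(x, list)

def lat_q(l):
    """lat?: true when every element of the list is an atom."""
    return all(is_atom(x) for x in l)

one = add1(zero)
two = add1(one)

print(lat_q(["a", "b", "c"]))  # True: a list of atoms
print(lat_q([one, two]))       # False: the "numbers" are lists, not atoms
```

The arithmetic helpers are happy with the representation, but lat? sees through the abstraction to the lists underneath, which is the chapter’s point.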

This isn’t a very common thing to read in an instructional programming book; hell, you’d be blown away to see it in a cookbook.

Searching around online, I couldn’t find a thorough explanation fleshed out beyond a couple of chat threads, where most users said “I guess they mean...but who cares?” or just complained about the book’s overall format. I know how I feel.

The phrase is out of place even in The Little Schemer, considering most chapters end with a recommendation to eat sweets. Nonetheless, it is perfect for what the authors want the reader to consider.

Going back to the Eighth Commandment above, it’s a considerable summation of the code-cleaning practices programmers can read up on in books such as Code Complete and Refactoring.

But why end the chapter like this and call the chapter "Shadows"?

It’s obviously a parental-level warning in the family of “Keep your head up.” While a programmer can abstract a lot, the taller an abstraction is built, the greater the shadow it may cast over operations that are still necessary, or rarely necessary, which can be even more painful (read the first chapter of Release It!). The shadows cast by the edifice of your abstraction ultimately lead to bugs, or worse, a crack in the abstraction that can’t be patched.

It’s a more delicate and literary warning than the one Joel Spolsky gave about frameworks. Spolsky, as usual, is more confrontational, and aside from the possibility of him yelling at me about this topic, Schemer’s warning sticks better. It’s like a caution given by an old woman near a dark wood.

However, these shadows are not cast by some creepy tree, but by our own code. It’s ultimately an admonishment to test: check your abstractions in places you wouldn’t necessarily use them, and be just as thorough as the creators of your language’s primitive methods. And, of course, be afraid.

The Little Schemer


The Little Schemer spells out in its introduction that the book is about recursion. In programming circles, Schemer is generally known as a great book for learning Scheme/Lisp (see Paul Graham for all the praise of Lisp you’ll need) or functional programming. While true, on my recent reading of the book, about twenty years after its first edition, the importance of Schemer lies more in how it presents these ideas than in their practicality in code.

Topical Focus

I’ve said before that there need to be more topically focused, short, consumable books for programmers, in contrast to giant tomes. Developers rarely need an immense index of a language’s every aspect or need to know every algorithm; instead they need specific cross-language experiences: algorithmic groups, object-oriented programming, and, as with Schemer, recursion, clocking in at under 190 pages.

The early Lisp family, with its annoying parentheses, can quickly cause someone new to the language either to give up or to invent Haskell. But as this book proves, Scheme’s syntax isn’t the point. The topic is recursion, and that’s it. Use a good IDE to help with your parentheses, move on, and be done quickly.


Really, Schemer is a Socratic discourse between a very sharp neophyte and his guide. A very short question-and-answer format is maintained throughout, except for the sprinkling of recursive commandments and a handful of asides.

The format is a breather from syntactically dense books (here’s how you make variables, here’s how you make arrays, classes, functions...250 pages later: you know X new JS framework), academically dense books, and the “Let’s program to the Extreme with bad jokes” books.

Using this format, Schemer is as nuanced as it comes, often annoyingly so, as the authors walk through recursive functions one logic decision at a time. However, as laborious as this may be, it’s best to heed the authors’ recommendation not to rush your reading any more than you would a good conversation, to experience this unique approach to programming pedagogy.
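To give the flavor of that decision-at-a-time style, here’s one of the book’s early recursions, member?, transplanted into Python (the book, of course, does this in Scheme):

```python
def member(a, lat):
    """member?: does atom a appear in the list of atoms lat?"""
    if not lat:                # Is lat empty? Then a cannot be in it.
        return False
    if lat[0] == a:            # Is the first element a? Then we're done.
        return True
    return member(a, lat[1:])  # Otherwise, ask the same question of the rest.
```

Each line corresponds to one question-and-answer exchange in the dialogue.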

The Why of the How

A large intent of this Socratic method is to really get down to why a person makes the choices they do, which is far more interesting and better demonstrates expertise. Compare asking an interviewee to write out shell sort on a whiteboard with having that same person verbally walk you through a short array using the algorithm while explaining why it’s more efficient than insertion sort.
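For reference, here’s a short Python sketch of shell sort, the kind of thing the interviewee would be talking through (a minimal version, not a tuned gap sequence):

```python
def shell_sort(arr):
    """Shell sort: insertion sort run over progressively smaller gaps.
    The early long-distance moves are why it beats plain insertion sort
    on average: far-from-home elements travel in a few big hops."""
    a = list(arr)
    gap = len(a) // 2
    while gap > 0:
        for i in range(gap, len(a)):
            item, j = a[i], i
            # Gapped insertion: shift larger elements right by `gap`.
            while j >= gap and a[j - gap] > item:
                a[j] = a[j - gap]
                j -= gap
            a[j] = item
        gap //= 2
    return a
```

Explaining why the shrinking gaps help is the “why of the how” that the whiteboard version never surfaces.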

In a time when a common complaint is that the rush of new frameworks and languages is overwhelming, and something employers expect programmers to keep up with, the main question has become how do I write something rather than how did the language come to be the way it is.

For its format and focus, The Little Schemer transcends the modern sense of programming instruction. It won’t be taught in a coding bootcamp, because in Schemer’s universe coding bootcamps don’t exist: you’re not in a hurry to get a job, because there is no job to be had. Only understanding.


Joel on Software

I was introduced to Joel Spolsky’s writing a few years ago when I was learning how to interview and run a software team. His most famous article, “The Joel Test: 12 Steps to Better Code,” may appear pretty basic now, but everything Joel recommended was written in the year 2000, before Agile, Atlassian autobots, Chef/Puppet, Jenkins, and Vagrant/Docker were considered essential to even a trivial software project.

I admire that a lot. But I wondered if there was any reason to still read Joel on Software for the first time, eleven years after its publication.

As it turns out, absolutely.

Early on, “The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)” was a fortunate starter. While I knew what character sets were and have explained to students how a character eventually gets printed on your screen, when I started reading this article I also happened to be working on an annoying character-map bug on my website. The article’s history and breakdown of how indecipherable characters appear in your emails gave me the background to know what the heck I was actually debugging.
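The article’s canonical failure mode is easy to reproduce: UTF-8 bytes decoded with the wrong character set (a generic illustration, not the specific bug I was chasing):

```python
# "é" is two bytes in UTF-8 (0xC3 0xA9); read them as Latin-1 and
# each byte becomes its own character.
original = "café"
mangled = original.encode("utf-8").decode("latin-1")
print(mangled)  # prints "cafÃ©"
```

Once you know to ask “which encoding wrote these bytes, and which one read them?”, most mojibake stops being mysterious.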

“Painless Functional Specs” is an excellent pairing to the hundreds of pages of Agile methodology I have read, which have become a generalized gospel: have faith in Scrum! I’ve learned that while you should be emotionally flexible to change, use Kanban, and take time to refactor, once projects reach a critical mass you do yourself a huge favor by having done a chunk of planning up front. In Joel’s belabored case, specs are not sacrosanct, just starting points, and are what most folks today would consider paper prototypes. I have not done this for many projects, and I have a lot of incomplete GitHub repositories to show for it.

Finally, in “Don’t Let Architecture Astronauts Scare You,” Spolsky writes, “Remember that the architecture people are solving problems they think they can solve, not problems that are useful to solve.” In the age of exploding frameworks and write-code-without-coding Shangri-Las, we are, to some degree, only reinforcing existing structures and problem approaches, not really creating new solutions. Time-savers like Rails, Angular, Unity, and such are fantastic and a great entry point for those entering particular programming spaces. But as Joel points out, don’t expect a revolution just because we’ve gone up one abstraction layer.

I’d add that there’s a ton of great historical information in the book that could only be written by someone living through those times, especially related to industry developments, old (bad) practices in software companies, and even a couple of knocks at Duke Nukem Forever’s release cycle.

Ultimately, Spolsky comes off a bit like the software version of Anthony Bourdain: rough around the edges and highly critical, but with the best intentions for creating great software, improving people’s lives with software, and improving the work lives of those who develop it.


So elegant

I’ve already written on the excellent Platform Series from MIT Press, and since discovering the series, the platform that I have been waiting for is the one that meant the most to me growing up – the Nintendo Entertainment System.

I AM ERROR by Nathan Altice finally dives deep into the hardware architecture of the NES and the expressive capabilities derived from its chips. I bought the book immediately, but surprisingly, I wasn’t as engaged as I thought I would be.

ERROR is an excellent book, incredibly detailed and well researched, with a breadth of topics ranging from hardware development in Japan to assembly coding of music to ROM hacking across the internet in the early ’90s.

These topics aren’t why ERROR didn’t pull me in. Instead, it’s a victim of the series’ own success. Racing the Beam, the inaugural title of the series, covering the Atari 2600, is also amazingly detailed, particularly in the translation of game concepts through the examined hardware architecture into actual expressive gameplay.

ERROR mirrors this same level of translation, which is, after all, the intent of the series; and while the NES has a lot more going on, the a-ha nature of the translation described above is much clearer and more accessible in Beam.

ERROR moves around a lot more and is specifically concerned with the idea of “translation,” given that the NES is known in its home country of Japan as the Famicom. The book’s goal, laid out in the introduction, is to determine how many perspectives on translation we can take when studying the NES, and it’s a brilliant thematic idea.

But the hardware isn’t as interesting in this case if you have been following the series, and honestly, for a person of my acumen in the realms of PCB and chip technology, it was a little over my head at times. That said, maybe one day I’ll appreciate it a lot more.

In contrast, two other titles in the series, The Future Was Here and Codename: Revolution, took their platforms (the Commodore Amiga and the Wii, respectively) and, while touching on the hardware, explored other aspects of expressive interaction with the machine, to the point where the hardware was only a jumping-off point.

It’s like getting really into the details of how the strings and pickups on a guitar interact, whilst most folks are only concerned with “what does it play?”

Future primarily covered the expressive gaming and demoscene enabled by the new multimedia computer, and Revolution discussed what it meant for software and hardware to interact in a physical space. Sure, the hardware had to be mentioned to start these conversations, but it served as a baseline for further exploration of artistic and human ideas transformed into a digital medium. In this way, the books remind me a lot of 10 Print’s vignettes on maze-generation code.

To be fair, I should backtrack on my criticism, which only comes from an embarrassment of riches. The intricacies required to program games cleverly on the NES are amazing, deserving a nod of respect to those developers, and form a rich primer on how graphics programming developed into a higher level of complexity than on the Atari. Likewise, the chapter “2A03,” on the NES’s sound chip architecture, would be my first recommendation to anyone interested in sound-chip programming, and a nice slice of humble pie for any contemporaries who currently do it with any degree of ego.

Finally, the chapter “Tool-Assisted,” while titled after popular tool-assisted speedruns (and, I’d note, glitch fun), contains a wonderful and well-explained history of hardware emulation, digging deep into IBM’s history, which even for a software person not interested in games is interesting on its own and of emerging importance for emulators used in production and development environments.

Overall, I’d recommend buying the book if you have been reading any of the Platform Series, as it is likely the most well-researched title yet; and if you have not read any of these titles and are not a hardware nut, reading Racing the Beam and I AM ERROR back to back would be an excellent combination to get you started on that path.

Silicon Valley Season 2

Silicon Valley is funny. Mike Judge has a lot of cred in finding the absurd in modern middle-class and suburban life, and as the show’s executive producer, he brings that style expertly to Valley. The show has always done a great job of actually telling jokes and finding the humor in character motives rather than tacking on whatever it could shoehorn in.

I really liked season one of this show, and I liked season two, which recently wrapped up its run on HBO.

Unfortunately, the show is already showing fatigue. The plot revolves around the continual fight between Pied Piper and the CEO of Hooli, who attempts to gain control of Piper’s middle-out data compression algorithm through any means necessary while the staff of Piper try to secure funding to fully launch their product.

The problem with this setup is that it is completely episodic and random – "Oh look, Hooli is pulling some bullshit legal tactic, oh now they’re doing something else I already forgot about because it’s brushed off as soon as it happened, WHAT? our angel investor is a crazy person and now he’s doing weird eccentric thing X, which as with Hooli we’ll discard as a memory and plot thread when the credits roll."

It’s not a story that actually builds, and because of that, when the Piper crew eventually succeeds, I don’t have the sense that they were up against much, just that they were annoyed throughout the season.

In contrast, season one actually built toward TechCrunch Disrupt and involved more than outside annoyances - the relationships of the team, the transition to a real company, and competition. That structure is far more intuitive, and as an audience member you can anticipate conflict and know what to expect from a growing company.

All of which totally sucks, because this season actually had more heart.

Richard’s pep talk that the team is there to “build epic shit,” alongside Jared’s maudlin speech about having had the best time of his life at Pied Piper despite all the stress, takes something thoroughly commodified, the start-up, and actually gives it some warmth.

There are a lot of folks in Silicon Valley, and here in Seattle, less concerned with what they are passionate about and more concerned with what will sell, and that is more insidious than some huge company like Hooli. It’s an internal, intentionally adopted destruction of one’s dreams rather than a defeat driven by an antagonist.

The Piper crew appears to genuinely want to do more for people, and the speeches from Jared and Richard legitimize this drive, contrasted excellently with Belson’s comment that “I don't want to live in a world where someone else is making the world a better place better than we are.”

I look forward to season three, but I hope that the humor and conflict can grow from these heartfelt motives in order to find insight within the laughs.

Cheers to small books

Recently I picked up Thomas Schwarzl’s 2D Game Collision Detection. It’s a couple of years old, and I already own much more thorough books on game development, such as Mathematics and Physics for Programmers and Game Physics Engine Development, but I wanted a book that was as stripped down and direct as possible on a topic I needed a little boning up on.

And indeed the book did exactly what it said it would and no more, a result a lot of programming books fail to deliver. Simple question - how do I solve the tunneling problem in collision detection? For Schwarzl it’s a couple of pages, while many books answer it so thoroughly that the technical issue becomes a readership problem.
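For readers unfamiliar with the term: tunneling is when a fast-moving object passes clean through a thin obstacle between frames because only its start and end positions are ever tested. A minimal sketch of one common fix (my own illustration, not Schwarzl’s code) is to sweep the motion and solve for a time of impact within the frame, shown here in 1D:

```python
# Sketch of a swept (continuous) collision test in 1D: instead of only
# checking the object's new position, solve for the fraction of the frame
# at which the moving point crosses the wall.

def swept_hit_1d(x0, x1, wall):
    """Return the fraction t in [0, 1] along the move x0 -> x1 at which
    the point crosses `wall`, or None if it never does."""
    if x0 == x1:
        return None  # not moving, so it can never cross
    t = (wall - x0) / (x1 - x0)
    return t if 0.0 <= t <= 1.0 else None

# A fast "bullet" moving from 0 to 10 in one frame would skip a wall at
# 4.5 under a naive position check, but the swept test catches it:
print(swept_hit_1d(0.0, 10.0, 4.5))  # 0.45
print(swept_hit_1d(0.0, 1.0, 4.5))   # None (wall never reached)
```

Real engines extend this same idea to shapes (swept AABBs, shape casts), but the core trick is the one above: test the path, not just the endpoints.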

While I wouldn’t knock books for attempting to provide a lot of knowledge for my page-buying buck, I would say there is a space missing in programming literature: the small book, and in particular, the small specialized book.

I have a couple of algorithm books on my shelves (well, in stacks on my floor), but some of these, while serving as rich collegiate textbooks, aren’t anything that can really engage the beginner or improve the intermediate developer.

By all means, I won’t be returning Sedgewick’s Algorithms; however, I would like to see books along the lines of 4 Sorting Algorithms that barely top 90 pages. Books of this variety wouldn’t be intended as the end-all of study, but they could provide a low-cost entry into a new subject without intimidation, expense, or excessive detail.

Sometimes you just want to be told in a few sentences what a bubble sort is without, for the moment, worrying about its O-notation compared to other algorithms. Similar series elsewhere - A Very Short Introduction, How to Read…, and every book on meditation - demonstrate the practicality of, and demand for, reading of this type.
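To make the point concrete, here is the few-sentence version of bubble sort as a short sketch (mine, not from any of the books mentioned): repeatedly walk the list and swap adjacent out-of-order pairs until a full pass makes no swaps.

```python
# Bubble sort: keep sweeping the list, swapping neighbors that are out of
# order, until one full sweep finds nothing to swap.

def bubble_sort(items):
    items = list(items)  # work on a copy, leave the input untouched
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

That is the whole idea; the O(n²) analysis can wait for the bigger book.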

The You Don’t Know JS series from Kyle Simpson is a perfect example, meeting all these criteria: short, detailed on a specialized topic, and free of extraneous information. Want to know closures thoroughly, but JUST closures? Well, Simpson’s written that book.

As programming literacy expands, I expect these types of books to appear more consistently, and we’ll see a move away from textbooks and language-survey tomes toward books on very specific language topics that readers can quickly consume and then apply.

Course I guess I could start writing them myself...
