#!: Predictable Randomness

My algorithmic poetry

Nick Montfort has two books that deal with randomness and expression. The first, 10 Print, I reviewed a couple of years back and found somewhat indulgent but ultimately worth the read. I recently got around to the second, #! (pronounced "shebang").

In a similar style to 10 Print, #! uses a variety of algorithms to produce poetry without human interaction, showing the code at the beginning of each chapter and the resulting poetry after it. Many of the contributors state that they groomed the generated text for the best bits, but I wouldn't say that's any different from using a maze-building algorithm and selecting the best results for a game's levels. Either way, the algorithm still produced the original content.

So let’s get to the obvious question - Is machine poetry any good?

Well, not really. I imagine that's not too surprising an answer. Many of the poems are not so much poems as patterns of letters, as in "All the Names of God," which slowly builds up longer combinations of letters by cycling through the alphabet, and "Alphabet Expanding," which is exactly what it sounds like: the alphabet written repeatedly, with each loop increasing the space between the letters.
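
To give a sense of how little is going on, here's my own reconstruction of the "Alphabet Expanding" idea in a few lines of Ruby - a sketch of the general pattern, not the book's actual code:

```ruby
# Print the alphabet over and over, widening the gap between letters on each pass.
10.times do |gap|
  puts ('a'..'z').to_a.join(" " * gap)
end
```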

Aside from these pattern generators, many of the poems draw on a bank of phrases or letter groups that are chosen mostly at random to create semi-coherent poems. However, the phrases and groups have clearly been selected because they'd sound poetic in just about any combination.
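
None of what follows is from the book; it's just a minimal Ruby sketch of that general phrase-bank approach - hand-pick fragments that already sound poetic, then recombine them at random:

```ruby
# A toy phrase-bank generator: curated fragments recombined at random,
# which is why almost any output sounds vaguely poetic.
OPENERS = ["the river remembers", "we counted the hours", "a window, half open"]
MIDDLES = ["in the color of static", "beneath a borrowed sky", "against the quiet machines"]
CLOSERS = ["and nothing answered", "until the light gave out", "as if it mattered"]

def stanza
  [OPENERS.sample, MIDDLES.sample, CLOSERS.sample].join("\n")
end

3.times { puts stanza, "" }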

The next obvious question, then - is it at least interesting?

From a technical standpoint, not especially. An intermediate programmer may find it interesting to see how bits of programs dealing with randomness or pattern-making can be written succinctly, which was part of Montfort's argument for the elegance found in 10 Print.

However, what 10 Print did well was allow its writers to imbue the mazes with meaning in different contexts, whether artistic, ludic, or technical. Here, the proof is on the page: do you enjoy what you are reading? And unfortunately, the answer is mostly no.

#! at the very least is neat looking in certain sections and will be a great book on my shelf to confuse my kids when they’re young. Thematically, it fits in perfectly with Montfort’s other work on creativity, expressiveness, and processing, so it is no surprise he took a chance with this topic.

Unfortunately, the book's contributors didn't vary their algorithms much; each aimed for a simple program running from roughly half a page down to a single line, depending on its text bank.

The approach feels lazy, and it leaves little to explore on the topic of procedurally generated poetry, because the procedure is basically the same every time, with a little flair thrown in here and there.

This is definitely a topic I imagine Montfort and others will circle back to as time goes on, but I hope that when #! 2: Electr1c Boogal0o comes out, we get to see different approaches to procedural literature beyond nested loops with calls to a rand() function.

Flash: an Unintended Eulogy

On July 25, 2017, Adobe announced it was ending development of the Flash platform in 2020. Three years earlier, when Anastasia Salter and John Murray wrote Flash: Building the Interactive Web, they concluded that while the Flash platform was in decline, the journalistic eulogizing of the platform was premature: "Flash may die someday...the web will be resplendent in its progeny."

Despite their conclusion, I think Flash deserves a proper eulogy, and this book is it.

The book is another excellent entry in MIT Press's Platform Studies series, and throughout it Salter and Murray have to contend with what many people's last memory of Flash is - the divisive fight picked by Apple and led by Steve Jobs, who described Flash as a platform that "falls short" in the mobile era. Comments like this from Jobs will most likely be remembered as the most historically damning, and they were the source of many of the immediate eulogies that circulated on the web in 2010. However, Apple's damnation, Salter and Murray point out, also drew out the problems with Apple's advocacy for its own app model and the difficulties facing open web standards.

“While Flash’s marketplace was completely free, without any intervention by Adobe...Apple had a different version for the web,” they write, with a hint of the annoyance plenty of app developers have felt since the opening of the App Store, with its myriad rules and regulations that are often more about content management than technical necessity - most notably with the banned app Phone Story, a casual game that had players saving suicidal workers at Apple's factories and brutalizing children to dig for the minerals used in chip manufacturing.

Even with regard to open web standards, anyone over thirty most likely remembers "the browser wars," in which the compatibility of web standards went off the rails as Mozilla, the W3C, and Microsoft vied for leadership of the standards - a battle that left scars web developers still feel today.

The authors argue that in the midst of the wars and the development of the app marketplaces, the centralized nature of Flash allowed it to maintain consistent compatibility (even today Flash is backward compatible with content from the 1990s), and that the platform's open distribution model was "essential to free expression."

While this is a nice jab at Flash's critics, throughout the chapters it's clear that Flash had a scattered nature - from its development history to the quality of the content its users put out - and that scatteredness is ultimately what made it brilliant as a creative platform. Scattered may sound disparaging, but I don't mean it that way.

With Flash you could publish anything, amateurish or professional, and it was accessible in the majority of browsers. In this way, artists and developers were able to experiment with short films, design, video games, narration, UI, and any other interactive environment they could dream up. Newgrounds is repeatedly pointed to as a hub of experimentation. In these spaces, people could do what was often missing from media production - sketches and traces. This is how people get good at things, and for visual and interactive media, the chance to practice those skills was most often confined to large production houses, television, and AAA game studios.

Sure, you could make a game, put it on a floppy or CD, and pass it around to your friends, but your distribution, and hence your feedback, was extremely limited. We've seen similar sketching in social media, on YouTube where entire shows are done with a laptop webcam, and of course in blogging. This review doesn't have to meet the editorial standards of a print publication dependent on advertisers. I'm just writing for fun about something I liked. Flash allowed its users the same opportunity - a (mostly) low barrier to entry with immediate distribution.

But, unfortunately, Flash was also scattered as a technical platform. In Flash, the final section is an interview with Jonathan Gay, the lead programmer for the Flash platform. Gay clearly outlines how many technical decisions were, for lack of a better term, just throwing out ideas. In the final chapter, "Flash and the Future," we see that as Flash's popularity waned, the stewards of the platform were never exactly sure how to redirect their powerful platform to make it sustainable for the mobile and app market. They ultimately failed.

The final portions of the book are disappointing to read - you learn how directionless and patched together aspects of the platform were, which was oddly matched with a strong commitment to the long-term sustainability of SWF objects. The earlier chapters, however, wonderfully detail how enriching the timeline was as a conceptual tool more than a technical one. It simply made sense to design and animation people, even as Flash's complexity grew with the introduction of ActionScript.

This is the only level

As I finished the book, I found my emotions about Flash were scattered as well. I bought Adobe Flash for $700 when I was first starting out as a developer, intending to use it for app development, and read a primer in a weekend. I was impressed with the possibilities, but that plan was very quickly shut down by Apple. At the same time, I had bought a book on HTML5 games, and the JS/CSS combination was more intuitive to me than Flash's system. Bottom line: I didn't know what I had really paid for. The platform was interesting, expensive, and ultimately useless to me. Overall, it was an exploration more than a game changer.

Still, I'm reminded that five years before that experiment, I wasted countless hours in my dorm room on Flash games like Defend Your Castle, repeatedly watched Homestar Runner cartoons, and was wowed by websites that had any degree of animation. By no means was Flash as a platform useless.

Flash deserves a eulogy, but as an audience and as developers we shouldn't feel any sadness. It was wonderful for a time, and like almost all software, it may still work, but it has served its purpose and should be studied for lessons going forward, so that we can look for the next scattered platform that encourages us to just throw ideas out again.

It's not a perk

Photo via Geekwire

This week Microsoft announced that it would be building a cricket pitch as part of its envisioned Redmond campus redesign. Blog posts about the new design excitedly pointed out that this demonstrates both Microsoft's largesse and its changing workforce, which includes more professionals from countries where cricket is popular.

While this certainly demonstrates that MS appreciates the cultures of its staff, I have one question - why not show that appreciation by letting them go home on time?

Though it's less noted since Ballmer's departure, MS is not known as some heartwarming company that just wants to have a good time and make exciting technology. Like most companies, it expects to get its money's worth when it invests in its employees. I won't knock MS for its pay scale and benefits, but it also shouldn't be any surprise that it asks a lot of its staff.

So as the perks go up, anyone in technology should know something - they expect a return on investment. MS isn't doing this out of largesse - it's doing it as an advertisement of how great a company it is, how cool it is, and how it's all about having fun. But all that comes at a price. If they actually cared about people just playing cricket, they'd build a pitch open to the public in a park. They could even call it the Microsoft Office 2018 Park for all I care.

For years I have worked for companies, not all of them mind you, that thought a once-a-month happy hour with one free beer, or Cokes in the fridge, was compensation for fourteen extra hours a week. Even at a $10-an-hour minimum wage, those fourteen hours are worth $140; a couple of beers doesn't pan out for anyone except the employer in that situation.

Instead of a happy hour on Friday, can I leave an hour early, or just keep working through it so I don't have to stay late on Monday? Because the work still needs to get done.

The more perks I see a company list in its profile, the more concerned I am. Health insurance is ultimately a salary negotiation, but noting that you have a sweet pool table and Jimmy John's every Tuesday tells me I'll be expected to work late and not leave the office on Tuesdays.

I'm not especially annoyed about the cricket pitch in Redmond. I'm not losing sleep or thinking the corporate overlords won. I'm sure it will ultimately allow for intramural sports among the staff and serve as a venue for employee appreciation days.

Ultimately, however, while MS shouldn't look like a fucking 1930s coal factory, we also shouldn't kid ourselves that "campuses" are there for our benefit. As the Bill Gates of The Simpsons put it so well - I didn't get rich by signing a bunch of checks.


Driving in an Empty Room

Photo by Jaromír Kavan on Unsplash

Let's set up a scenario, which you can color however you like - a self-driving car is going down a road next to a ravine when suddenly a group of schoolchildren jumps out in front of it. The car's AI now has a decision to make - kill the children or kill the driver.

This hypothetical and its variants are intended to force us, as a technological culture, to confront the dangers and philosophical implications of our promised future. How do we teach computers who lives and who dies when we as humans can't make that decision ourselves, even given perfect timing?

The problem is that the question is idiotic. I mean that in the kindest possible way - it’s the type of question regarding technology that attempts to sound profound, but is like deciding world politics using a Risk board.

Let’s take almost this exact situation without a self-driving car:

What happened? Did the driver have to make a decision between the kid’s life and their own? No - the driver did exactly what a self-driving vehicle would do. It braked.

Of course, this was only possible because of revolutionary technological breakthroughs in braking systems. Had that been a truck fifteen years older, we would have seen a much darker end. There wouldn't even have been an option to swerve out of the way, regardless of how responsive the driver was.

The point here is that a seemingly dull piece of technology, braking systems, suddenly makes this apparently intense philosophical question pointless. While the trolley problem or the example above might be a fun way to superficially warn about the dangers of technology, it is the associated technologies that come along with self-driving cars that will resolve these problems - not solved philosophy.

If you looked at the first Wright brothers plane and then imagined it flying at 30,000 feet, would you ask - is the fast travel worth it if some passengers will freeze to death? No. That's stupid - you'd instead think, as history has shown, that you aren't going to fly that high until you build safe environments in which that is no longer a question.

Take an example from the film Demolition Man. A silly movie, to be sure, but one piece of technology has always stood out to me - the styrofoam crash mechanism. For those who haven't seen the movie, to Stallone's surprise, when he crashes a car for the first time, the body completely transforms into styrofoam that pads his impact, removes shattering glass, and allows him to quickly break free from the wreckage.

It's a fun visual gag; however, imagine our self-driving car with a similar mechanism. What, beyond inconvenience, does it matter if you fly off into the ravine if your car turns into a safe bubble? There's no philosophical problem there, only an insurance problem.

While the barstool criticism of self-driving cars ponders how best robots can serve man, keep in mind that a car is more than its driver. Even modern vehicles do their best to mitigate harm to the driver and to those around them, regardless of the decisions of whoever is behind the wheel.


Why I Taught Ruby

Photo by Jonathan Simcoe on Unsplash

In my second year of teaching, I changed programming languages. I previously taught my introductory programming class in Java. I wrote up exactly why I thought that was a good idea. I was wrong.

With the change of language, the class's content changed along with it. With Ruby, the class moved from basic programming concepts - variables, loops, conditionals - to basic sorting, trees, encryption, and random number generation. With Java, the class started from those same basics but was intended to culminate in an OOP-driven standalone application. Again, that was wrong.

Here is why Ruby worked:

Platform Accessible

I said this about Java too, but with Ruby I decided to leverage Repl.it instead of depending on students wrangling development environments on their own laptops. The platform allowed them to program in class, save their work online, and send me assignments through the Repl.it system. Additionally, I could use the same platform to provide code samples and hand out assignments.

Theoretically I could have done this with Java as well, but since the Java class focused on spinning up real applications and working in a real IDE (Eclipse, in my case), running it through Repl.it would have defeated that purpose, as students couldn't actually build a real JAR there (at the time).

Compilation and typing are still there

How do you explain compilation? It converts English and symbols into machine instructions, and you usually have to type a command or push a button to do it, like printing a document. Done. No need to watch a little progress bar on the screen.

Typing? Even with dynamic typing in Ruby, learning the language still requires the sense that variables are treated differently depending on whether they hold a Hash, a number, a string, or some other object. Nothing about that lesson is lost by skipping an explanation of what a double is.
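
A quick illustration of what I mean (my own example, not a course assignment):

```ruby
# No declared types, but the class of a value still matters.
things = [42, "42", [4, 2], { answer: 42 }]
things.each { |thing| puts "#{thing.inspect} is a #{thing.class}" }

puts 42 + 1      # => 43    (Integer arithmetic)
puts "42" + "1"  # => "421" (String concatenation)
# "42" + 1       # => TypeError - Ruby won't silently convert between them
```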

With those easy topics dealt with verbally rather than through struggling with syntax, I made the lessons harder

While previously I praised the fact that students had to focus a little more with Java because of curly brackets, typing variables, and just the overhead of running a hello world program, that is a shitty check on student ability. It’s like telling music students that they have to tune the piano before they can play it.

Ruby is extremely easy to get started with and has a relaxed transition from the basics to OOP. With this in mind, I was able to breeze past variables, conditionals, loops, and functions so that we could eventually discuss more difficult topics.

OOP is generally pointless for students

I still covered OOP in my class, but more as a demonstration of how to properly use classes. The lack of emphasis on OOP in Ruby (compared to Java, at least) and its syntactic focus on elegant, short lines of code let me concentrate on how we could work through really interesting algorithms in succinct bits of code.

Think about it - is it more interesting for new students to see how a binary tree automatically sorts numbers in a few lines, or to have them build another damn Car class with Wheel objects?
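
For what it's worth, here's roughly what I mean - a sketch (mine, not verbatim from my course materials) of a binary search tree sorting a handful of numbers in not much Ruby at all:

```ruby
# A minimal binary search tree: insert numbers, then read them back in sorted order.
Node = Struct.new(:value, :left, :right)

def insert(node, value)
  return Node.new(value) if node.nil?
  if value < node.value
    node.left = insert(node.left, value)
  else
    node.right = insert(node.right, value)
  end
  node
end

def in_order(node, out = [])
  return out if node.nil?
  in_order(node.left, out)
  out << node.value
  in_order(node.right, out)
end

root = nil
[7, 2, 9, 4, 1].each { |n| root = insert(root, n) }
p in_order(root)  # => [1, 2, 4, 7, 9]
```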

Students new to programming should learn what's possible out there and see it working, not how to build stable application code.


Overall, Ruby took my sights off of OOP, and its clean syntax forced me to move past the minutiae that students shouldn't need to concern themselves with. Instead, they were given the chance to see how much they could do with a few lines and a computer science mindset.

They can learn about doubles versus floats later if they like.

White Noise

I listen to white noise most of the day. It's background sound that drowns out my tinnitus without being a distraction the way letting Netflix run in the background is. While plenty of folks claim it boosts brainpower, I'd argue any benefit is probably just because people tend to be more focused.

While sites like Noisli and A Soft Murmur let users generate generic white noise to cancel, or at least drown out, tinnitus and nearby room noise, I find that white noise that sounds like actual noise - real environments rather than pure static - works best for me.

In order to add my two cents to the endless stream of opinions on the Internet, I'll recommend a few videos from two channels that I use almost every day.

Sleep Ambient

This channel has two main focuses - general fantasy settings and video game environments. It’s particularly good for its blend of environmental noises - the weather, the interior sounds, active agents (typically animals) and maybe water.

It's also a good place to appreciate video game sound designers and their level of detail for environments the player typically just runs past. In fact, the channel's videos are often recordings of a game simply left running with the controller untouched - every now and then a game notification pops up.

Harry Potter ASMR

I liked the Harry Potter series, but even if you didn't, this channel does an excellent job of not only creating environmental mixes, as Sleep Ambient does, but also of shifting those environments over time - in "Dumbledore's Office," for example, where presumably he wanders over to grab a book, or in the Three Broomsticks, where the noise gets louder as the pub fills up and quieter as guests leave. Of course, you only see a few faint shadows moving in and out of the pub.

I greatly appreciate the people who put these mixes together. For anyone with tinnitus or prone to easy distraction, the soundscapes provide an ignorable level of distraction and the right amount of audio buffer.

I would be massively in YouTube's debt if it would set the ads on these videos to ones that don't involve people screaming right before or in the middle of them.


The Annotated Turing: A review of reading

Charles Petzold's The Annotated Turing was a book I had been looking forward to reading for a while. I felt I basically knew the central tenets of Turing's universal machine and the halting problem (a phrase not used by Turing), but I lacked an understanding of how the ideas were built up and how they were implemented.

This happens a lot with computer science. Basic algorithmic processes are simple to explain, and the general operation of a computer can be outlined by an intermediately skilled user, but where the rubber meets the road gets glossed over. This gap between generalization and specifics is most evident in the blogs and comment threads of technical interviewers who find that their magna cum laude CS graduate interviewee can't program FizzBuzz.
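
For anyone who hasn't sat on either side of that interview, FizzBuzz is about as basic as the specifics get - in Ruby, something like:

```ruby
# FizzBuzz: multiples of 3 print "Fizz", multiples of 5 print "Buzz",
# multiples of both print "FizzBuzz", everything else prints the number.
(1..100).each do |n|
  if n % 15 == 0
    puts "FizzBuzz"
  elsif n % 3 == 0
    puts "Fizz"
  elsif n % 5 == 0
    puts "Buzz"
  else
    puts n
  end
end
```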

For my part, I had watched a visually pleasing YouTube clip on Turing's halting problem, I knew the requirements of a Turing machine, and I had even read through parts of Introduction to Functional Programming through Lambda Calculus, so I felt I would be pretty comfortable finally settling down with Turing's paper "On Computable Numbers, With an Application to the Entscheidungsproblem," guided by Petzold.

As Petzold explains at the beginning of the book, Turing's paper could be reassembled in its entirety from Petzold's annotated version. Petzold provides a fairly thorough, if hurried, explanation of the math you'll at least need to have heard of to follow certain sections of the paper, building up to chapter 14, "The Major Proof."

And this is where I fall off, and where my biggest takeaway from the book occurs, albeit one independent of its subject matter.

In chapter 14, as Turing comes to the conclusion that the Entscheidungsproblem cannot be solved, I felt nothing. Throughout the book, I knew I was missing some concepts and that I could have spent more time with the unwieldy first-order logic equations, but that wasn't the reason I didn't respond with "Ah! Of course!" when Turing reached his conclusion.

Instead, it was because the entire time I was focused on how the book might be building toward what I had seen in the YouTube videos. And for a variety of reasons, it just never got there. I kept looking for clues to what I already knew rather than simply listening to what Turing was saying in the moment.

Above, I said there is a huge difference between general understanding and detailed understanding. There is nothing wrong with the former, as it eventually leads to the details, but it was an error on my part to assume that general understanding was understanding, and I distracted myself by demanding that the specific meet the general somewhere.

It’s easy to hold onto the general understanding as something solid, but to move between different levels of detail requires some degree of abandonment.

It's the difference between "knowing about" and "knowing" a topic, and Annotated helped me understand not so much that the difference exists, but that failing to account for it in how you read or digest a new topic can block the shift from one level to the other.

Nonetheless, despite my troubles, Annotated is a worthwhile read, even for a not so worthy reader.


Superintelligence

The most telling aspect of Superintelligence is the praise blurbs on the cover and back.

“Human civilisation is at stake” - Financial Times

“I highly recommend this book” - Bill Gates

I'm not sure what I'm supposed to feel, and that uncertainty reflects the general problem with the arguments in Superintelligence. Reading the book, you can move from being terrified by an idea to saying "huh, maybe" within the span of minutes.

Superintelligence's basic premise is that artificial intelligence may someday surpass human intelligence and, most importantly, move beyond human control. What if this AI decides that humans are unnecessary, a threat, or simply composed of reusable atoms it needs for its goals?

The author, Nick Bostrom of Oxford University's Future of Humanity Institute, leads the reader toward the conclusion that this is indeed a very likely situation, whether through malice or through ignorance of human values on the AI's part.

Bostrom's chief concern is the possibility of constraining a superintelligent AI, at least until we can properly trust that its activities would benefit mankind. It is the vaguest problem among many others: superintelligence's motivation toward self-preservation, its ability to take control of the world, and its ability to choose and refine goals. While all of these issues are argued to be inevitable given enough time, it is the "control problem" that determines how destructive the others become.

It is at this point that a further blurb about the book is necessary: “[Superintelligence] has, in places, the air of theology: great edifices of theory built on a tiny foundation of data.”

The review, from The Telegraph, also argues that the book is a philosophical treatise rather than a popular science book, and I agree; when I described the book to friends, most responded philosophically rather than from a technical perspective.

It is with this perspective that Superintelligence applies an approach similar to Daniel Dennett's in Darwin's Dangerous Idea - given enough time, anything is possible, regardless of the mechanics.

The simple response is “Well, what if there isn’t enough time?”

That response doesn't work against Dennett's argument ("The universe is this old, we see the complexity we do, therefore enough time is at least this long, and we have no other data point to consider"), but it was a popular response to Superintelligence. I personally heard "We'll kill each other before then" and "We aren't smart enough to do it."

Both of these arguments reflect the atheistic version of the faith that The Telegraph suggests the reader needs and that Bostrom holds to throughout the book: given enough time, superintelligence will be all-powerful and all-knowing - near god-like, except that it cannot move beyond the physical.

However, much as an atheist can draw value from the Gospels, even the unconvinced can remember a few sentences from Bostrom and take pause. Bostrom's central concern is how to control technology, particularly technology that nobody - ourselves included - fully understands how it is made. Moreover, this should be a concern even when the programmers know how a program works but the public using it does not. It is the same concern that makes people assume, nonchalantly, that the government is already tracking their location and their information.

Even without superintelligence, the current conversation about technology is a shrug and an admission that that's just how it is. Bostrom leans heavily toward pacing ourselves rather than ending up dead. Given our current acceptance of the undesirable in our iPhones, shouldn't we also wonder whether we should pace ourselves, or pause and examine our current progress in detail, rather than excitedly waiting for the next product?

This isn’t to say we should stop technological progress. Instead, alongside innovation, there needs to be analysis of every step.

Ever wonder what's in your OS's source code? Could it be tracking you, logging every keystroke and sending it off to some database? What if all software were open source? Wouldn't that solve the problem?

This isn't a technological problem, is it? The question of open source for everything is an economic and industrial one, though it may ultimately be solved by technology.

Consider that, over the last twenty years, restaurants and food producers have tied themselves not simply to producing food to eat, but to the type and intent of the food they produce - is it sustainable? Is it safe for the environment? Does it reflect the locale? I imagine not many people these days would be surprised to see a credo on a menu alongside the salads.

What about software? Are we only to expect that kind of commitment from ex-hippies and brilliant libertarian hackers? What about Apple, Google, and Microsoft? It's an ideal, certainly - once you reveal the Google search algorithm, what's left but for a competitor to copy it? I don't have an answer for this, but I understand there is an exchange - Google keeps its competitive edge, and it also keeps all my information.

We are already being victimized by unknown technology, and we shrug or make some snarky comment. Even though Superintelligence argues that certain technology is inevitable, we can still shape how it is made.

Wouldn’t it be great if we started practicing that now?

Bare Metal on Raspberry Pi B+: Part One

One of the primary reasons I purchased my Raspberry Pi B+ is that I thought there was probably a way to do bare metal programming on it. "Bare metal" is a general term for writing code that talks directly to the processor and hardware with no operating system underneath - essentially low-level programming.

Fortunately, a professor at Cambridge University released a series called "Baking Pi" that takes students through building a basic interactive operating system, all written in assembly on the Raspberry Pi B, with some minor startup help to get the OS loaded.

You'll note this post is about the B+, which does not perfectly mirror the B used in the tutorial. I've shared my code with the correct B-to-B+ address changes if you're looking for a quick answer on how to do the introductory exercises.

However, I have a little more advice, both about introducing yourself to bare metal generally (somebody is gonna nail me for using that term liberally) and about the conversion to the B+:

Read Documentation on ARM

The explanation of the assembly instructions provided by Cambridge is a nice intro, but it does skim over a couple of minor details just to get the project rolling. While I totally understand the intent, I would highly recommend pausing at each instruction and reading the documentation at the ARM Information Center. The information there is still pretty sparse, but it's a good way to wade into the ARM documentation tome in a directed manner.

Read the Peripheral Manual

Robert Mullins, the "Baking" author, repeatedly mentions that he knows where the GPIO memory locations are because he read the manual. Unfortunately, I was using the B+, which has a different memory map than the B, so I had to look up the actual locations of the GPIO memory myself.

Fortunately, a Raspberry Pi forum post (a link I've since lost) pointed me in the right direction, but I was still forced to go into the manual. This turned out to be very helpful: once I got my head around how the manual lays out the memory regions and their functions, things actually started to make sense, though with plenty of "okay, if you say so" moments.

Similar to my recommendation on ARM assembly, even if you’re using the B, double-check everything that Mullins says just for the experience of finding it yourself. Sorta like how librarians used to force you to use the Dewey Decimal Cards.

Build a pipeline

Not surprisingly, doing bare metal required a lot of trial and error, particularly since I couldn't get error reports. I know there are ways to use a JTAG adapter to deploy and monitor the boot process, but I don't have much experience with that or the equipment to support it. Nope - instead I just pulled the SD card in and out of my Mac every time.

Save yourself five seconds every minute - write a script that runs the build commands, properly ejects the disk, and exits with an error code in case you made a typo. It pays for itself after a couple of hours of work. I have included a sample script in my repo.
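
For a sense of the shape of it, here's a minimal sketch of that kind of helper in Ruby - the volume name, build target, and macOS-only diskutil call are assumptions about my setup, not something from the Baking Pi materials:

```ruby
#!/usr/bin/env ruby
# Hypothetical build-and-eject helper. Adjust VOLUME and the build
# command for your own toolchain and SD card.

VOLUME = "/Volumes/RPI"

def run(cmd)
  puts "> #{cmd}"
  system(cmd) or abort("failed: #{cmd}")  # bail out with a non-zero exit on any error
end

run "make kernel.img"                               # whatever your build command is
abort("no SD card mounted at #{VOLUME}") unless Dir.exist?(VOLUME)
run "cp kernel.img #{VOLUME}/"
run "diskutil eject #{VOLUME}"                      # macOS; unmounts so the card is safe to pull
puts "Done - move the card over to the Pi."
```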

Create Status Functions

After the first lesson in Cambridge's OK sequence, you're able to turn on the Pi's ACT light. The second lesson explains how branching works so you can create functions. With this little bit of information, you can create a pseudo-breakpoint function to at least observe how far the code has gotten, or to test values for equality (if you read up on your ARM condition codes). It's a bit of a nuclear option, but it's the only feedback you can get without a more advanced setup.

Start Building your System Library Immediately

Right alongside my last point: start creating a system library and organizing your code around it for basic things like the breakpoint status methods, pointing to the GPIO region, and so forth. While you can follow along with Mullins' examples, you never know how far the project will take you, and it'd be nice to establish well-refactored APIs sooner rather than later.

Furthermore, it's a nice test of whether you understood everything that was discussed: can you at least rename and move code around while maintaining its functionality? Plus you can use cool names like "SYS_OPS_SOMETHING."

Read Elements of Computing Systems

This is an amazing introduction to systems at this level, without the distraction of peripherals, reloading flash cards, or toying with stack arithmetic when all you can see is a little light. In fact, the tools provided by the book's authors let you actually see the various memory locations in both RAM and the processor. It isn't what you need to get really hands-on, but the conceptual framework was a great resource as I dove into Cambridge's lessons, particularly when stack organization conventions came up.


Overall, it's a fantastic series, and the admittedly small job of converting to the B+ forced me to examine what was actually happening and lean on more general coding conventions and knowledge.


Crypto: an oral history of numbers and hearing the same damn thing

Not my key

“What crypto policy should this country have? Codes that are breakable or not?”

RSA co-inventor Ron Rivest's absolutely-not-hypothetical question from 1992 was all the more prescient this past year, as the US government pressed Apple to decrypt the company's iPhones in the name of national security. It was an all-too-familiar back-and-forth between social advocates, technology experts, and the government. Rivest's question still lingers: does the public have the right to secure codes?

My personal opinion is yes. If you disagree, the reality is that that’s too bad.

Steven Levy's Crypto is an oral history far more detailed than my barstool argument. As the book chronicles, the general situation over the last half century and more is that governments, in particular the NSA, have held a monopoly on codebreaking and encryption - so much so that for anyone outside "The Triple Fence," studying cryptography appeared to be an absolute waste of time. Why bother, when you weren't really going to need codes, and even if you made one, the NSA or another government had far more resources to crack it?

Despite their omnipotence, there was one problem that governments had yet to solve. Regardless of how good your code, or any government's code, was, at some point you still needed to hand off the key to the recipient. That key could, of course, be stolen. In the '60s and '70s, this bothered a young mathematics student, Whit Diffie, so much that he spent years on another critical question - how do you solve the key exchange problem?

This question creates the dividing path in Crypto's mostly oral history of cryptography. Governments had lots of ways to sneak keys around, plenty of ways to generate new codes with new keys quickly, and, they hoped, exact knowledge of which messages might have been compromised. The average person was clearly outmatched by these resources. So there was a reason for the above-average person to study cryptography after all - defeating the key exchange problem so you could trust sending and receiving messages.

This led Diffie and an electrical engineering professor, Martin Hellman, to create public key cryptography. It's what everybody now uses on the web, but more importantly it had been a vague spectre haunting the NSA for years. The idea is brilliantly simple, and it produces encrypted messages for which key interception is no longer an issue, because the only key that's transmitted - the public key - is one everyone is already allowed to see.
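
If you want a feel for just how simple, here is a toy version of the Diffie-Hellman exchange in Ruby - tiny made-up numbers, nothing like real parameters, using Ruby's Integer#pow for the modular arithmetic:

```ruby
# Toy Diffie-Hellman key exchange - purely to show the shape of the idea.
p_mod = 23                                 # public prime (real ones are enormous)
g     = 5                                  # public generator

alice_secret = 6                           # never transmitted
bob_secret   = 15                          # never transmitted

alice_public = g.pow(alice_secret, p_mod)  # sent in the clear
bob_public   = g.pow(bob_secret, p_mod)    # sent in the clear

# Each side combines the other's public value with its own secret.
alice_key = bob_public.pow(alice_secret, p_mod)
bob_key   = alice_public.pow(bob_secret, p_mod)

puts alice_key == bob_key  # => true: both arrive at the same shared key
```

An eavesdropper sees only the prime, the generator, and the two public values; recovering either secret from those is the hard part, which is the whole trick.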

Wikipedia's simple explanation

Following Diffie and Hellman's conceptual breakthrough, Rivest and two others (Adi Shamir and Leonard Adleman) combined the public key idea with one-way functions - functions that are easy to compute but practically impossible to reverse - and the insanely powerful yet easy-to-use encryption system became a usable system rather than a conceptual one.

A third of the way into Crypto, the clash between these unlikely and unsuspecting crypto-heroes begins to unfold. As the creators of this technology attempt to cash in and lead the way toward email, e-commerce, and Bitcoin, others attempt to give it away on moral principle, and meanwhile the US government attempts to stop them all.

It's a brilliant story, built on eight years of interviews, in which, surprisingly, the same arguments from decades ago echo into our current political climate.

Though he never says it, Levy focuses on questions and stories that greatly resemble Daniel Dennett's idea that Darwin's evolutionary theory was a form of universal acid - an acid so strong it can burn through anything, even its container, so that to create it means it will inevitably burn through the earth.

Regardless of any mistakes Darwin may have made, the idea still holds, and Levy's history demonstrates the exact same point - once public key cryptography and one-way functions were out, it didn't matter if the government tried to reduce key sizes to however many bits it felt it could easily crack in the name of national security. The idea alone was enough that someone in the world with the proper motivation could simply go ahead and create something more powerful at 1,024 bits once the computing power allowed it.

Likewise, today, the US government pressuring Apple means very little. Most folks, particularly those who engage in crime, know about burner phones, and it's not as if data centers and smartphone manufacturers exist only in the US. If you're running a terrorist cell, just don't use your iPhone for crime. The idea of encryption is already out there. To this exact point, Levy's epilogue is about a young post-WWII British government cryptographer who invented public key cryptography - and was prevented from ever speaking about it.

Levy makes a poignant argument in his closing pages: the encryption enabled by public key cryptography and the RSA algorithm was ultimately beneficial, and the attempts to always have a backdoor made consumers distrust, and therefore not use, US products, hurting software and commerce generally.

Though written in 2001, Crypto's history is acutely relevant to our present situation and, at the very least, a baseline for anyone who wants to move from the barstool to the coffee shop.

Levy's book is also one of the better works of computing history, one that took great pains to find the original people involved and interview them in depth. Too often computing history is wrapped in the hype of wealth (ahem, The Social Network) rather than the intrinsic value of the technology. When cryptography, and whatever the next hot topic is, become so personal and so integrated into all aspects of our lives, books like Levy's are all the more critical to generating an informed discussion and finding a path forward, instead of rehashing the same tired arguments.