Why I Taught Ruby



In my second year of teaching, I changed programming languages. I previously taught my introductory programming class in Java. I wrote up exactly why I thought that was a good idea. I was wrong.

With the change of language, the class’s content changed along with it. With Ruby, the class moved from basic programming concepts - variables, loops, conditionals - on to basic sorting, trees, encryption and random number generation. With Java, the class started from those same basics but was intended to culminate in an OOP-driven standalone application. Again, that was wrong.

Here is why Ruby worked:

An Accessible Platform

I said this about Java too, but with Ruby, I decided to leverage Repl.it instead of depending on students to handle development on their own laptops. The platform allowed them to program in class, save their work online, and send me assignments through the Repl.it system. Additionally, I could use the same platform to provide code samples and hand out assignments.

Theoretically I could have done this with Java as well, but with Java’s focus on spinning up real applications and working in a real IDE (Eclipse in my case), running it through Repl.it would have defeated that purpose, since students couldn’t actually produce a real JAR (at the time).

Compilation and typing are still there

How do you explain compilation? It converts English and symbols into machine instructions, and you usually have to type a command or push a button to do it, like printing a document. Done. No need to see the little progress bar on the screen.

Typing? Even with dynamic typing in Ruby, learning the language still requires the sense that variables are treated differently depending on whether they hold a Hash, a number, a string or some other object. The lesson isn’t lost just because you never have to explain what a double is.
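Something like this quick snippet gets the idea across (an illustration of plain Ruby behavior, not a transcript from class):

# Every value knows its own type - no declarations required.
x = 42
y = "42"
z = { answer: 42 }

puts x.class   # => Integer
puts y.class   # => String
puts z.class   # => Hash

puts x + 1     # => 43
puts y + "1"   # => "421" - same operator, different behavior depending on type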

With those easy topics handled verbally rather than through syntax struggles, I made lessons harder

While I previously praised the fact that students had to focus a little more with Java because of curly brackets, typed variables, and just the overhead of running a hello-world program, that is a shitty check on student ability. It’s like telling music students that they have to tune the piano before they can play it.

Ruby is extremely easy to get started with and has a relaxed transition from the basics to OOP. With this in mind, I was able to breeze past variables, conditionals, loops and functions so that we could eventually discuss more difficult topics.

OOP is generally pointless for students

I still covered OOP in my class, but more as a demonstration of how to properly use classes. The lack of emphasis on OOP in Ruby (compared to Java at least) and its syntactic focus on elegant, short lines of code let me concentrate on how we could work through really interesting algorithms in succinct bits of code.

Think about it - is it more interesting for new students to see how a binary tree sorts numbers in a few lines, or to build another damn Car class with Wheel objects?
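Something like the following sketch is what I have in mind - a bare-bones binary search tree that hands its numbers back in sorted order (a from-scratch illustration, not my actual class materials):

# A minimal binary search tree: insert numbers, read them back sorted.
class Node
  attr_accessor :value, :left, :right

  def initialize(value)
    @value = value
  end

  def insert(v)
    if v < value
      left ? left.insert(v) : (self.left = Node.new(v))
    else
      right ? right.insert(v) : (self.right = Node.new(v))
    end
  end

  # In-order traversal yields the values in sorted order.
  def sorted
    [*left&.sorted, value, *right&.sorted]
  end
end

root = Node.new(50)
[23, 88, 4, 61, 35].each { |n| root.insert(n) }
p root.sorted   # => [4, 23, 35, 50, 61, 88]

That is the whole lesson: sorting falls out of the structure itself, which is a far better conversation starter than getters and setters.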

Students new to programming should learn what’s possible out there and see it working, not how to build stable application code.

 

Overall, Ruby took my sights off of OOP, and its clean syntax forced me to move past the minutiae that students shouldn’t need to concern themselves with. Instead, they were given the chance to see how much they could do with a few lines and a computer science mindset.

They can learn about doubles versus floats later if they like.

White Noise

I listen to white noise most of the day. It’s background sound that drowns out my tinnitus without being a distraction the way letting Netflix run in the background is. While plenty of folks claim it boosts brainpower, I’d argue any benefit probably comes from people simply being more focused.

While sites like Noisli and A Soft Murmur let users mix general white-noise sounds to cancel, or at least drown out, tinnitus and nearby room noise, I find that white noise that actually sounds noisy is best for me.

In order to add my two cents to the endless stream of opinions on the Internet, I’ll recommend a few videos from two channels that I use almost every day.

Sleep Ambient

This channel has two main focuses - general fantasy settings and video game environments. It’s particularly good for its blend of environmental noises - the weather, the interior sounds, active agents (typically animals) and maybe water.

It’s also a good place to appreciate video game sound designers and the level of detail they put into environments the player is typically just running past. In fact, many of the channel’s videos appear to be recordings of a game left running with the controller untouched - every now and then a game notification will pop up.

Harry Potter ASMR

I liked the Harry Potter series, but even if you didn't, this channel does an excellent job of not only doing environmental mixes as Sleep Ambient does, but also creating shifts in those environments over time - in "Dumbledore's Office," for instance, he presumably wanders over to grab a book, and in the Three Broomsticks the noise gets louder as the pub fills up and quieter as guests leave. Of course, you only see a few minor shadows moving in and out of the pub.

I greatly appreciate the people who put these mixes together. For anyone with tinnitus or prone to easy distraction, the soundscapes provide an ignorable level of distraction and the right amount of audio buffer.

I would be in massive debt to YouTube if they would set the commercials to ones that don’t involve people screaming right before or in the middle of these videos.

BONUS TRACK

The Annotated Turing: A review of reading

Charles Petzold’s The Annotated Turing was a book I had been looking forward to reading for a while. I felt I basically knew the central tenets of Turing’s Universal Machine and the halting problem (a phrase not used by Turing), but I lacked an understanding of how the ideas were built up and how they were implemented.

This happens a lot with computer science. Basic algorithmic processes are simple to explain, and the general operation of a computer might be outlined by an intermediately skilled user, but where the rubber meets the road is glossed over. This gap between generalization and specifics is most evident in blogs and comment threads written by technical interviewers who find that their magna cum laude CS graduate interviewee can’t program FizzBuzz.

For my part, I had watched a visually pleasing YouTube clip on Turing’s halting problem, I knew the requirements of a Turing machine, and I had even read through parts of Introduction to Functional Programming through Lambda Calculus, so I felt I would be pretty comfortable finally settling down with Turing’s paper “On Computable Numbers, with an Application to the Entscheidungsproblem,” guided by Petzold.

As Petzold explains at the beginning of the book, Turing’s paper could be reassembled from Petzold’s annotated version. Petzold provides a fairly thorough but hurried explanation of the math you’ll at least need to have heard of to follow certain sections of the paper, building up to chapter 14 - “The Major Proof.”

And this is where I fall off, and where my biggest take-away from the book occurs, albeit independent of its subject matter.

In chapter 14, as Turing comes to the conclusion that the Entscheidungsproblem is impossible, I felt nothing. Throughout the book, I knew I was missing some concepts and that I could have spent more time with the unwieldy first-order logic equations that were presented, but that wasn’t the reason I didn’t respond with “Ah! Of course!” when Turing reached his conclusion.

Instead, it was because the entire time I was focused on how the book might be building toward the YouTube videos I had already seen. And for a variety of reasons, it just wasn’t there. I kept looking for, and assuming, that certain parts were clues to what I already knew rather than simply listening to what Turing was saying in the moment.

Above, I said that there is a huge difference between general understanding and detailed understanding. While there is nothing wrong with the former, as it eventually leads to the details, it was an error on my part to assume that general understanding was understanding, and I distracted myself by demanding that the specific meet the general somewhere.

It’s easy to hold onto the general understanding as something solid, but to move between different levels of detail requires some degree of abandonment.

It’s the difference between “knowing about” and “knowing” a topic, and Annotated helped me understand not so much that that difference existed, but that failing to incorporate that understanding in how you read or digest a new topic can block the shift from one place to another.

Nonetheless, despite my troubles, Annotated is a worthwhile read, even for a not so worthy reader.

Superintelligence

The most telling aspect of Superintelligence is the praise blurbs on the cover and back.

“Human civilisation is at stake” - Financial Times

“I highly recommend this book” - Bill Gates

I’m not sure what I’m supposed to feel, and that uncertainty is reflected in the general problems with the arguments in Superintelligence. Reading the book, you can move from being terrified by an idea to saying “huh, maybe” within the span of minutes.

Superintelligence’s basic premise is that artificial intelligence may someday advance beyond human intelligence and, most importantly, beyond human control. What if this AI decides that humans are unnecessary, a threat, or simply composed of reusable atoms it needs for its goals?

The author, Nick Bostrom of Oxford University’s Future of Humanity Institute, leads the reader toward the conclusion that this is indeed a very likely situation, whether through malice or through ignorance of human value on the part of this AI.

Bostrom’s chief concern is the possibility of constraining a superintelligent AI, at least until we can properly trust that its activities would benefit mankind. It is the vaguest problem among many others: superintelligence’s motivation toward self-preservation, its potential ability to control the world, and its ability to choose and refine goals. While all these issues are argued to be inevitable given enough time, it is the “control problem” that determines how destructive the others become.

It is at this point that a further blurb about the book is necessary: “[Superintelligence] has, in places, the air of theology: great edifices of theory built on a tiny foundation of data.”

The review, from The Telegraph, also argues that the book is a philosophical treatise and not a popular science book, with which I would agree; when I described the book to friends, they tended to respond philosophically rather than from a technical perspective.

It is with this perspective that Superintelligence takes an approach similar to Daniel Dennett’s in Darwin’s Dangerous Idea - given enough time, anything is possible regardless of the mechanics.

The simple response is “Well, what if there isn’t enough time?”

This response doesn’t work against Dennett’s argument (“The universe is this old, we see the complexity we do, therefore enough time is at least this long, and we have no other data point to consider”), but it was a popular response to Superintelligence. I personally heard “We’ll kill each other before then” and “We aren’t smart enough to do it.”

Both of these arguments reflect the atheistic version of the faith The Telegraph suggests the reader needs, and which Bostrom holds to throughout the book: given enough time, superintelligence will be all-powerful and all-knowing - near god-like, except that it can move beyond the physical.

However, much like an atheist can still draw value from the Gospels, even the unconvinced can remember a few sentences from Bostrom and take pause. Bostrom’s central concern is how to control technology, particularly technology whose workings neither we nor anybody else understands. Moreover, this should be a concern even when programmers know how a program works but the public using it does not. It is the same concern that makes people assume, nonchalantly, that the government is already tracking their location and their information.

Even without superintelligence, the current conversation about technology is a shrug and an admission that that’s how it is. Bostrom leans heavily toward pacing ourselves rather than ending up dead. Given our current acceptance of the undesirable in our iPhones, shouldn’t we also wonder whether we should pace ourselves, or pause and examine our current progress in detail, rather than excitedly waiting for the next product?

This isn’t to say we should stop technological progress. Instead, alongside innovation, there needs to be analysis of every step.

Ever wonder what’s in your OS’s source code? Could it be tracking you, logging every keystroke and sending it off to some database? What if all software were open source? Wouldn’t that solve the problem?

This isn’t a technological problem, is it? The question of open source for everything is an economic and industrial question, though it may ultimately be solved by technology.

Consider that, in the last twenty years, restaurants and food producers have tied themselves not simply to producing food to eat, but to the type and intent of the food they produce - is it sustainable? Is it safe for the environment? Does it reflect the locale? I imagine not too many people would be surprised to see a credo on a menu alongside the salads these days.

What about software? Are we only to expect that kind of commitment from ex-hippies and brilliant libertarian hackers? What about Apple, Google and Microsoft? It’s an ideal, certainly - once you show the Google search algorithm, what’s to stop a competitor from copying it? I don’t have an answer for this, but I understand there is an exchange - Google keeps its competitive edge, and it also keeps all my information.

We are already being victimized by unknown technology, and we shrug or make some snarky comment. Even though Superintelligence argues that certain technology is inevitable, we can still shape how it is made.

Wouldn’t it be great if we started practicing that now?

Bare Metal on Raspberry Pi B+: Part One

One of the primary reasons I purchased my Raspberry Pi B+ is that I thought there was probably a way to do bare metal programming on it. “Bare metal” is a general term for writing code that talks directly to a computer’s processor - essentially, low-level programming.

Fortunately, a professor at Cambridge University released a series called “Baking Pi” that takes students through building a basic interactive operating system, all written in assembly on the Raspberry Pi B, with some minor start-up help to get the OS loaded.

You’ll note this post is about the B+, which does not perfectly mirror the B used in the tutorial. I’ve shared my code that correctly handles the B-to-B+ address changes, if you’re looking for a quick answer on how to do the introductory exercises.

However, I have a little more advice, both on introducing yourself to bare metal generally (somebody is gonna nail me for liberally using that term) and on the conversion to the B+:

Read Documentation on ARM

The explanation of the assembly commands provided by Cambridge is a nice intro, but they do skim over a couple of minor details just to get the project rolling. While I totally understand the intent, I would highly recommend pausing at each assembly command and reading the documentation at the ARM Information Center. The information is still pretty sparse there as well, but it’s a good way to wade into the ARM documentation tome in a directed manner.

Read the Peripheral Manual

Robert Mullins, the “Baking” author, repeatedly mentions that he knows the GPIO RAM locations because he read the manual. Unfortunately, I was using the B+, which has a different memory map than the B, so I had to look up the actual locations of the GPIO memory myself.

Fortunately, a Raspberry Pi forum pointed me in the right direction (a link I’ve since lost), but I was nonetheless forced to go into the manual. This turned out to be very helpful: once I got my head around how the manual outlined RAM and its functions, it actually started to make sense, though with plenty of “Okay, if you say so” moments.

Similar to my recommendation on ARM assembly, even if you’re using the B, double-check everything that Mullins says just for the experience of finding it yourself. Sorta like how librarians used to force you to use the Dewey Decimal Cards.

Build a pipeline

Not surprisingly, doing bare metal required a lot of trial and error, particularly since I couldn’t get error reports. I know there are ways to use a JTAG to deploy and monitor boot, but this is actually something I don’t have very much experience with or equipment to support. Nope, instead I just pulled that flash drive in and out of my Mac every time.

Save yourself 5 seconds every minute - write a script that runs your build commands, properly ejects your disk, and exits with an error code in case you made a typo. It’s worth it after a couple of hours of work. I have included a sample script in my repo.
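Purely as an illustration of the shape of such a script (this sketch is not the one from my repo; the volume name and build command are placeholders you would swap for your own):

#!/usr/bin/env ruby
# Illustrative build-and-eject helper for macOS.
VOLUME    = "/Volumes/boot"   # where the Pi's SD card mounts (placeholder)
BUILD_CMD = "make"            # whatever produces kernel.img (placeholder)

def run(cmd)
  puts "==> #{cmd}"
  system(cmd) or abort("FAILED: #{cmd}")   # bail out with a non-zero exit code
end

run(BUILD_CMD)
run("cp kernel.img #{VOLUME}/")
run("diskutil eject #{VOLUME}")            # safe to pull the card now
puts "Done - move the card over to the Pi."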

Create Status Functions

After the first lesson in Cambridge’s OK sequence, you’re able to turn on the Pi’s ACT light. The second lesson explains how branching works to create functions. With this little bit of information, you can create a pseudo-breakpoint function to at least observe how far in the code you’ve gotten, or test values for equality (if you read up on your ARM codes). It’s a bit of a nuclear option, but it’s the only feedback you can get without a more advanced setup.

Start Building your System Library Immediately

Right alongside my last point: start creating a system library and organizing your code around it for basic things like the breakpoint status methods, pointing to the GPIO region, and so forth. While you can follow along with Mullins’ examples, you never know how far the project will take you, and it’d be nice to start establishing well-refactored APIs sooner rather than later.

Furthermore, it’s a nice test of whether you understood everything that was discussed if you can at the very least rename and move code around while maintaining its functionality. Plus you can use cool names like “SYS_OPS_SOMETHING.”

Read The Elements of Computing Systems

This is an amazing introduction to systems at this level, without a lot of concern for peripherals, reloading flash cards, or toying with stack arithmetic when all you can see is a little light. In fact, the tools provided by the book’s authors let you actually see the various memory locations in both RAM and the processor. Though not what you need to get really hands-on, the conceptual framework was a great resource as I dove into Cambridge’s lessons, particularly when stack organization standards arose.

 

Overall, it’s a fantastic series, and the admittedly small process of converting to the B+ forced me to examine what was actually happening and lean on more general coding conventions and knowledge.

 

Crypto: an oral history of numbers and hearing the same damn thing

Not my key

“What crypto policy should this country have? Codes that are breakable or not?”

RSA encryption co-inventor Ron Rivest’s absolutely-not-hypothetical question from 1992 was all the more prescient this past year, as the US government pressed Apple to decrypt the company’s iPhones in the name of national security. It was an all too familiar back-and-forth between social advocates, technology experts and the government. Rivest’s question still lingers: does the public have the right to secure codes?

My personal opinion is yes. If you disagree, the reality is that that’s too bad.

Steven Levy’s Crypto is an oral history far more detailed than my barstool argument. As chronicled in the book, the general situation over the last half century and more is that governments, in particular the NSA, have had a monopoly on code breaking and encryption - so much so that for those outside “The Triple Fence,” studying cryptography appeared to be an absolute waste of time. Why bother, since you weren’t really going to need codes, and even if you made one, the NSA or another government had far more resources to crack it?

Despite their omnipotence, there was one problem that governments had yet to solve. Regardless of how good your code, or any government’s code, was, at some point you still needed to hand off the key to the recipient. That key could, of course, be stolen. In the ’60s and ’70s, this bothered a young mathematics student, Whit Diffie, so much that he spent years on another critical question: how do you solve the key exchange problem?

This question creates the dividing path in Crypto’s mostly oral history of cryptography. Governments had lots of ways to sneak keys around, plenty of ways to generate new codes with new keys quickly, and, hopefully, a way to know exactly which messages may have been compromised. The average person was clearly outmatched by these resources. So there was a reason for the above-average person to study cryptography: defeating the key exchange problem so you could trust sending and receiving messages.

This led Diffie and an electrical engineering professor, Martin Hellman, to create public key cryptography. It’s what everybody now uses on the web, but more importantly, it had been a vague spectre to the NSA for years. The idea is brilliantly simple and produces encrypted messages for which key interception is no longer an issue, because the key that’s transmitted - the public key - is something everyone is already allowed to see.

Wikipedia's simple explanation
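To get a feel for the arithmetic, here is a toy Diffie-Hellman-style exchange in Ruby with laughably small numbers (real systems use enormous primes, and you should never roll your own crypto; Integer#pow with a modulus needs Ruby 2.5+):

# Toy Diffie-Hellman key exchange with tiny, insecure numbers.
p_mod = 23   # public prime modulus
g     = 5    # public generator

a = 6        # Alice's secret
b = 15       # Bob's secret

alice_public = g.pow(a, p_mod)   # sent in the clear
bob_public   = g.pow(b, p_mod)   # sent in the clear

# Each side combines the other's public value with its own secret.
alice_shared = bob_public.pow(a, p_mod)
bob_shared   = alice_public.pow(b, p_mod)

puts alice_shared == bob_shared   # => true; both arrive at the same secret (2 here)

An eavesdropper sees the modulus, the generator and both public values, but recovering either secret from them is the hard part - that asymmetry is the whole trick.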

Following Diffie and Hellman’s conceptual breakthrough, Rivest and two others combined the public key idea with one-way functions - functions that are easy to compute but effectively impossible to reverse without a secret - and the insanely powerful but easy-to-use encryption system became usable rather than merely conceptual.
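To make that tangible, here is the textbook toy version of RSA in Ruby (tiny, insecure numbers, no padding - purely illustrative):

# Toy RSA with textbook-sized numbers.
p_prime, q_prime = 61, 53
n   = p_prime * q_prime               # 3233, part of the public key
phi = (p_prime - 1) * (q_prime - 1)   # 3120
e   = 17                              # public exponent
d   = 2753                            # private exponent: (e * d) % phi == 1

message    = 65
ciphertext = message.pow(e, n)        # anyone can encrypt with the public key
decrypted  = ciphertext.pow(d, n)     # only the private key reverses it

puts ciphertext   # => 2790
puts decrypted    # => 65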

A third of the way into Crypto, the clash between these unlikely and unsuspecting crypto-heroes begins to unfold. As the creators of this technology attempt to cash in and lead the way for email, ecommerce and, eventually, Bitcoin, others attempt to give it away as a matter of moral principle, while the US government attempts to stop them all.

It’s a brilliant story, built on eight years of interviews, in which, surprisingly, the same arguments from decades ago echo into our current political climate.

Though he never says it, Levy focuses on questions and stories that greatly resemble Daniel Dennett’s idea that Darwin’s evolutionary theory was a form of universal acid - an acid so strong it can burn through anything, even its container, so that to create it means it will inevitably burn through the earth.

Regardless of any mistakes Darwin may have made, the idea still holds, and Levy’s history demonstrates the exact same point: once public key cryptography and one-way functions were out, it didn’t matter if the government tried to reduce the key size to however many bits it felt it could easily crack in the name of national security. The idea was enough that someone in the world with the proper motivation could simply go ahead and create something more powerful at 1024 bits once the computing power allowed it.

Likewise, today, with the US government pressuring Apple, it means very little. Most folks, particularly those who engage in crime, know about burner phones, and it’s not as if data centers and smartphone manufacturers only exist in the US. If you’re running a terrorist cell, just don’t use your iPhone for crime. The idea of encryption is already out there. To this exact point - Levy’s epilogue is about a young post-WWII British government cryptographer who invented, and was prevented from ever speaking about, public key cryptography.

Levy makes a poignant argument in his closing pages that the encryption created by public key cryptography and the RSA algorithm was ultimately beneficial, and that the attempts to always have a backdoor made consumers distrust, and therefore not use, US products, hurting software and commerce generally.

Though written in 2001, Crypto’s history is acutely relevant to our present situation and a baseline for anyone who wants to move from the barstool to the coffee shop, at the very least.

Levy’s book is also one of the better historical works on computing, one that took great pains to find the original people involved and interview them in depth. Too often computing history is surrounded by the hype of wealth (ahem, The Social Network) rather than the intrinsic value of the technology. When cryptography, or the next hot topic, becomes so personal and integrated into all aspects of our lives, books like Levy’s are all the more critical for generating informed discussion and finding a path forward, instead of rehashing the same tired arguments.

Code of the Week - April 5, 2016

Found in a legacy project, this line of PHP was intended to mirror DOM indentation in a Drupal theme function. I appreciate that the developer (whom I'm not picking on - I've certainly done worse) was trying to maintain a sense of layout; however, there are roughly a hundred lines of code above this line mixing in tons of other business logic, metadata aggregation, database queries, and so on.

Therefore, you're not going to get any sense of the DOM from looking at this function. Instead, you'll get a really funky-looking append in the midst of all this chaos.

A function with these attributes, and a line like this, just screams separation of concerns. Needless to say, it was quickly refactored so I could just read the thing.

You Must Beware of Shadows

The Eighth Commandment of The Little Schemer - use help functions to abstract from representations - is as obvious as most of the Ten Commandments. Of course you would use other functions to support or create abstraction.

To make the case more concrete, the authors attempt to represent primitive numbers with collections of empty lists (e.g. (()) for one, (()()) for two). This is a new level of parenthetical obnoxiousness, and for a while the reader may think, “Are they going to do this for the rest of the book?” - because the authors then go on to demonstrate that a lot of the API used thus far in the book works for this representation.
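If you don’t feel like squinting at the parentheses, here is a loose Ruby translation of the trick (my own analogy - the book, of course, does all of this in Scheme):

# Numbers built out of nothing but empty lists, translated into arrays.
zero = []            # ()
one  = [[]]          # (())
two  = [[], []]      # (()())

rep_zero = ->(n) { n.empty? }
rep_add1 = ->(n) { [[]] + n }
rep_sub1 = ->(n) { n[1..-1] }

p rep_zero.call(zero)            # => true
p rep_add1.call(one) == two      # => true
p rep_sub1.call(two) == one      # => true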

But then there’s lat?. Testing reveals that it doesn’t work as expected with this abstraction - the “numbers” are lists of lists, not lists of atoms. The chapter concludes with the two speakers exchanging:

Is that bad?
You must beware of shadows

This isn’t a very common thing to read in an instructional programming book - hell, you’d be blown away to see it in a cookbook.

Searching around online, I couldn’t find many people fleshing out a thorough explanation beyond a couple of chat groups, where most users said “I guess they mean... but who cares?” or just complained about the book’s overall format. I know how I feel.

The phrase is out of place even in The Little Schemer, considering most chapters end with a recommendation to eat sweets. Nonetheless, it is perfect for what the authors want the reader to consider.

Going back to the Eighth Commandment above, it’s a considerable summation of the code-cleaning practices programmers can read about in books such as Code Complete and Refactoring.

But why end the chapter like this and call the chapter "Shadows"?

It’s obviously a parental-level warning in the family of “keep your head up.” While a programmer can abstract a lot, the taller an abstraction is built, the greater the shadow it may cast over operations that are still necessary - or only rarely necessary, which can be even more painful (read the first chapter of Release It!). The shadows cast by the edifice of your abstraction lead ultimately to bugs or, worse, a crack in the abstraction that can’t be patched.

It’s a more delicate and literary warning than the one Joel Spolsky gave about frameworks. Spolsky, as usual, is more confrontational, and aside from the possibility of him yelling at me about this topic, Schemer’s warning sticks better. It’s like a caution given by an old woman near a dark wood.

However, these shadows are not cast by some creepy tree, but by our own code. It’s ultimately an admonishment to test, to check your abstractions in places you wouldn’t necessarily use them, and to be just as thorough as the creators of the primary methods of your language. And, of course, to be afraid.

Brilliance and Thoroughness

I've made a lot of mistakes programming. Naturally, this can start to give you the sense that you may not be as sharp as you had hoped. Typically, I've brushed it off and told myself something about my hard work or persistence, maybe to spare my ego or at least to give myself the feeling that I had value in the job market. Regardless, those reassurances ultimately didn't make me feel more confident, because almost all of my mistakes had nothing to do with being sharp, insightful or clever.

The classics of programming as a craft - The Pragmatic Programmer, Clean Code, Code Complete - generally boil down to one statement: “don’t be lazy.” When I read that statement and all the many ways each author repeats it, and then compare it to my track record, I really fall short of the mark. Not because I am comparing myself to excellent and well-known professionals; it’s because I trail in one defining quality of professional work - thoroughness.

To give a simple example of thoroughness: are you the sort of person who moves the furniture when vacuuming or sweeping? Did you wipe down the counter after cooking? Do you feel your plates after hand-washing to make sure they’re not still greasy?

Now, I wouldn’t call not doing these things “lazy,” but thoroughness is a requisite skill for not being considered so.

For a long time, both in my kitchen and in code, I believed I could skirt by on tiny details. No need to check that site in IE; I assume it’s fine. Don’t worry about thinking up a few more test cases; most people will follow the same usage pattern. No one really reads commit messages, so no need to reread them just to be sure they’re clear. I'll rename that variable later.

None of these are errors or sins, but they are much like a kitchen counter that's never properly wiped down - crumbs congregate under the toaster, there's always a little moisture around the edge of the sink, and when you turn on a stove burner, the room starts to smell like burning carbon. 

It’s not lazy; it’s the presumptive hope that everything will work out so you can save a few minutes. Unfortunately, it’s exactly what John Wooden meant when he said, “If you don’t have time to do it right, when will you have time to do it again?”

The sad thing is, I always assumed that brilliant programmers could just write stuff once and it would be exceptional and work every time. Like the story of Da Vinci drawing a perfect circle to demonstrate his capabilities - had he erased a couple of points and said, “Hold on,” we probably wouldn’t be retelling a most likely made-up story. I believed, subconsciously, that I just needed some time to pass before I would get to that point.

I’m reminded of a cookbook by the famous molecular gastronomy chef Ferran Adria about his restaurant, El Bulli. Somewhere in the middle of this expensive cinder block of a book is a series of photos of the staff at El Bulli cleaning the kitchen, Adria included. The people are wiping down every surface in that kitchen, including the legs of tables, and staring with the same intensity they would give their dishes as they plate.

These chefs are brilliant, but to be so, they also have to be thorough about even mundane tasks. Cleaning isn’t beneath them; it’s an essential element of being some of the best cooks in the world. Furthermore, the reason they are at that level most likely derives from this basic attention to detail, something the customers will never see. Adria's inclusion in these photos demonstrates that this process never goes away. Regardless of how well you cook, how inspired or efficient you are, the counter will still need to be wiped down.

Back in code, I no longer hope to be the super hacker, smarmy, always-gets-it-right-the-first-time programmer. Instead, I hope to start by cleaning well and see where that takes me.

 

The Little Schemer


The Little Schemer spells out in its introduction that the book is about recursion. In programming circles, Schemer is generally known as a great book for learning Scheme/Lisp (see Paul Graham for all the praise of Lisp you’ll need) or functional programming. While true, on my recent reading of the book, some 20 years after its first edition, the appeal of Schemer lies more in how it presents these ideas than in their practicality in code.

Topical Focus

I’ve said before that there need to be more topically focused, short, consumable books for programmers, in contrast to giant tomes. It’s rare that developers need an immense index of a language’s every aspect or need to know every algorithm; instead they need specific cross-language experiences - groups of algorithms, object-oriented programming, and, as with Schemer, recursion, clocking in at under 190 pages.

The early Lisp family, with its annoying parentheses, can quickly cause someone new to the language to either give up or invent Haskell. But as this book proves, Scheme’s syntax isn’t the point. The topic is recursion and that’s it. Use a good IDE to help with your parentheses, move on, and be done quickly.

Dialogue

Really, Schemer is a Socratic discourse between a very sharp neophyte and his guide. A very short question-and-answer format is maintained throughout, except for the sprinkling of recursive commandments and a handful of asides.

The format is a breather from syntactically dense books (here’s how you make variables, here’s how you make arrays, classes, functions... 250 pages later: you now know new JS framework X), academically dense books, and the “let’s program to the Extreme with bad jokes” books.

Using this format, Schemer is as nuanced as it comes, often annoyingly so, as the authors walk through recursive functions one logic decision at a time. However, as laborious as this may be, it’s best to heed the authors’ recommendation not to rush your reading, any more than you would rush a good conversation, in order to experience this unique approach to programming pedagogy.

The Why of the How

A large part of the intent of this Socratic method is to really get down to why a person makes the choices they do, which is a lot more interesting and better demonstrates expertise. Compare asking an interviewee to write out Shell sort on a whiteboard with having that same person verbally walk you through a short array using the algorithm while explaining why it’s more efficient than insertion sort.

At a time when a common complaint is that the rush of new frameworks and languages is overwhelming - and something employers expect programmers to keep up with - the main question is “how do I write something” rather than “how did the language come to be the way it is.”

For its format and focus, The Little Schemer transcends the modern sense of programming instruction. It won’t be taught in a coding bootcamp, because in Schemer’s universe coding bootcamps don’t exist: you’re not in a hurry to get a job, because there is no job to be had. Only understanding.

 
