Superintelligence

The most telling aspect of Superintelligence is the praise blurbs on the cover and back.

“Human civilisation is at stake” - Financial Times

“I highly recommend this book” - Bill Gates

I’m not sure what I’m supposed to feel, and that uncertainty reflects the general problem with the arguments in Superintelligence. Reading the book, you can move from terrified by an idea to muttering “huh, maybe” within the span of a few minutes.

Superintelligence’s basic premise is that artificial intelligence may someday surpass human intelligence and, most importantly, move beyond human control. What if this AI decides that humans are unnecessary, a threat, or simply a source of reusable atoms it needs for its goals?

The author, Nick Bostrom of Oxford University’s Future of Humanity Institute, leads the reader toward the conclusion that this is indeed a very likely situation, whether through malice or through the AI’s ignorance of human values.

Bostrom’s chief concern is whether we can constrain a superintelligent AI, at least until we can properly trust that its activities would benefit mankind. It is the vaguest problem among many: a superintelligence’s drive toward self-preservation, its potential ability to control the world, and its ability to choose and refine its own goals. While Bostrom argues all of these are inevitable given enough time, it is the “control problem” that determines how destructive the others become.

It is at this point that a further blurb about the book is necessary: “[Superintelligence] has, in places, the air of theology: great edifices of theory built on a tiny foundation of data.”

That review, from The Telegraph, also argues that the book is a philosophical treatise rather than a popular science book, with which I agree. In most cases when I described the book to friends, they tended to respond philosophically rather than from a technical perspective.

It is with this perspective that Superintelligence applies a similar approach to Daniel Dennett’s in Darwin’s Dangerous Idea - given enough time, anything is possible, regardless of the mechanics.

The simple response is “Well, what if there isn’t enough time?”

This doesn’t suffice against Dennett’s argument (“The universe is this old, we see the complexity we do; therefore enough time is at least this long, and we have no other data point to consider”), but it was a popular response to Superintelligence. I personally heard “We’ll kill each other before then” and “We aren’t smart enough to do it.”

Both of these arguments reflect the atheistic version of the faith The Telegraph suggests the reader needs, and which Bostrom holds to throughout the book: given enough time, superintelligence will be all-powerful and all-knowing - near god-like, except that it can move beyond the physical.

However, much as an atheist can still draw value from the Gospels, even the unconvinced can remember a few sentences from Bostrom and take pause. Bostrom’s central concern is how to control technology, particularly technology whose workings nobody fully understands. Moreover, this should be a concern even when programmers know how a program works but the public using it does not. It is the same concern that makes people assume, nonchalantly, that the government is already tracking their location and their information.

Even without superintelligence, the current conversation about technology amounts to a shrug and an admission that that’s just how it is. Bostrom leans heavily toward pacing ourselves rather than ending up dead. Given our current acceptance of the undesirable in our iPhones, shouldn’t we also wonder whether to pause and examine our current progress in detail rather than excitedly waiting for the next product?

This isn’t to say we should stop technological progress. Instead, alongside innovation, there needs to be analysis of every step.

Ever wonder what’s in your OS’s source code? Could it be tracking you and logging every keystroke, sent off to some database? What if all software were open source? Wouldn’t that solve the problem?

This isn’t a technological problem, is it? The question of open source for everything is an economic and industrial question, though it may ultimately be solved by technology.

Consider that, in the last twenty years, restaurants and food producers have tied themselves not simply to producing food to eat, but to the type and intent of the food they produce - is it sustainable? Is it safe for the environment? Does it reflect the locale? I imagine few people today would be surprised to see a credo printed on a menu alongside the salads.

What about software? Are we only to expect that kind of commitment from ex-hippies and brilliant libertarian hackers? What about Apple, Google and Microsoft? It’s an ideal, certainly - once you reveal the Google search algorithm, what’s left but for a competitor to copy it? I don’t have an answer for this, but I understand there is an exchange - Google keeps its competitive edge, and it also keeps all my information.

We are already being victimized by unknown technology, and we shrug or make some snarky comment. Even though Superintelligence argues that certain technology is inevitable, we can shape how it is made.

Wouldn’t it be great if we started practicing that now?

Bare Metal on Raspberry Pi B+: Part One

One of the primary reasons I purchased my Raspberry Pi B+ is that I figured there was probably a way to do bare metal programming on it. “Bare metal” is a general term for writing code that talks directly to a computer’s processor with no operating system underneath - essentially low-level programming.

Fortunately, a professor at Cambridge University released a series called “Baking Pi,” which takes students through building a basic interactive operating system, all in assembly on the Raspberry Pi B, with some minor start-up help to get the OS loaded.

You’ll note this post is about the B+, which does not perfectly mirror the B used in the tutorial. If you’re looking for a quick answer on how to do the introductory exercises, I’ve shared my code with the correct B-to-B+ address changes.

However, I have a little more advice, both general advice on introducing yourself to bare metal (somebody is gonna nail me for using that term liberally) and advice specific to the conversion to the B+:

Read Documentation on ARM

The explanation of the assembly instructions provided by Cambridge is a nice intro, but it does skim over a couple of minor details just to get the project rolling. While I totally understand the intent, I would highly recommend pausing at each assembly instruction and reading the documentation at the ARM Information Center. The information is still pretty sparse there as well, but it’s a good way to wade into the ARM documentation tome in a directed manner.

Read the Peripheral Manual

Robert Mullins, the “Baking” author, repeatedly mentions that he knows where the GPIO registers sit in memory because he read the manual. Unfortunately, I was using the B+, which doesn’t have the same memory map as the B, so I had to look up the actual locations for the GPIO memory myself.

Fortunately, a Raspberry Pi forum post pointed me in the right direction (a link I’ve since lost), but I was still forced to go into the manual. This turned out to be very helpful: once I got my head around how the manual lays out the memory regions and their functions, it actually started to make sense, though with plenty of “okay, if you say so” moments.

Similar to my recommendation on ARM assembly, even if you’re using the B, double-check everything Mullins says just for the experience of finding it yourself. Sorta like how librarians used to force you to use the Dewey Decimal card catalog.

Build a Pipeline

Not surprisingly, doing bare metal required a lot of trial and error, particularly since I couldn’t get error reports. I know there are ways to use a JTAG to deploy and monitor the boot, but that’s something I don’t have much experience with, or the equipment to support. Nope, instead I just pulled the SD card in and out of my Mac every time.

Save yourself 5 seconds every minute - write yourself a script that runs the build commands, properly ejects your disk, and exits with an error code in case you made a typo. It’s worth it after a couple hours of work. I have included a sample script in my repo.
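To give a sense of what I mean, here is a minimal sketch in Python (not the script from my repo - the volume name and kernel filename are placeholders for a typical macOS setup):

```python
#!/usr/bin/env python3
"""Build the kernel image, copy it onto the SD card, and eject the card."""
import shutil
import subprocess
import sys

SD_CARD = "/Volumes/BOOT"   # hypothetical mount point of the Pi's boot partition
KERNEL = "kernel.img"       # hypothetical output of the assembly build

def run(cmd):
    print("+", " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        # Bail out early so a typo never makes it onto the card.
        sys.exit(result.returncode)

run(["make"])                          # assemble and link the kernel image
shutil.copy(KERNEL, SD_CARD)           # drop the image onto the card
run(["diskutil", "eject", SD_CARD])    # cleanly eject so it's safe to pull
```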

Create Status Functions

After the first lesson in Cambridge’s OK sequence, you’re able to turn on the Pi’s ACT light. The second lesson explains how branching works so you can create functions. With just this little bit of information, you can create a pseudo-breakpoint function to at least observe how far the code has gotten, or to test values for equality (if you read up on your ARM condition codes). It’s a bit of a nuclear option, but it’s the only feedback you can get without a more advanced setup.

Start Building your System Library Immediately

Right alongside my last point, start creating a system library and organizing your code around it for basic things like the breakpoint status methods, pointing to the GPIO region, and so forth. While you can simply follow along with Mullins’ examples, you never know how far the project will take you, and it’d be nice to start laying down well-refactored APIs sooner rather than later.

Furthermore, it’s a nice test of whether you understood everything that was discussed: can you at least rename and move code around while maintaining its functionality? Plus you get to use cool names like “SYS_OPS_SOMETHING.”

Read Elements of Computing Systems

This book is an amazing introduction to systems at this level, without the distractions of peripherals, reloading SD cards, or toying with stack arithmetic when all you can see is a little light. In fact, the tools provided by the authors let you actually watch the various memory locations in both RAM and the processor. Though it won’t get you truly hands-on, the conceptual framework was a great resource as I dove into Cambridge’s lessons, particularly when stack organization conventions arose.

 

Overall, it’s a fantastic series, and the admittedly small task of converting to the B+ forced me to examine what was actually happening and lean on more general coding conventions and knowledge.

 

Crypto: an oral history of numbers and hearing the same damn thing

Not my key

“What crypto policy should this country have? Codes that are breakable or not?”

RSA co-inventor Ron Rivest’s absolutely-not-hypothetical question from 1992 was all the more prescient this past year, as the US government began pressing Apple to decrypt the company’s iPhones for the purposes of national security. It was an all-too-familiar back-and-forth between social advocates, technology experts and the government. Rivest’s question still lingers: does the public have the right to secure codes?

My personal opinion is yes. If you disagree, the reality is that that’s too bad.

Steven Levy’s Crypto is an oral history of sorts, far more detailed than my barstool argument. As chronicled in the book, the general situation over the last half century and more is that governments, the NSA in particular, have held a monopoly on code making and code breaking - so much so that, for anyone outside “The Triple Fence,” studying cryptography appeared to be an absolute waste of time. Why bother, when you weren’t really going to need codes, and even if you made one, the NSA or another government had far more resources to crack it?

Despite this near-omnipotence, there was one problem that governments had yet to solve. Regardless of how good your code or any government’s code was, at some point you still needed to hand off the key to the recipient, and that key could of course be stolen. In the 60s and 70s, this bothered a young mathematician, Whit Diffie, so much that he spent years on the critical question: how do you solve the key exchange problem?

This question creates the dividing path in Crypto’s history of cryptography. Governments had lots of ways to sneak keys around, plenty of ways to generate new codes with new keys quickly, and the means to know, hopefully, exactly which messages might have been compromised. The average person was clearly outmatched by these resources. So there was a reason for the above-average person to study cryptography: defeating the key exchange problem, so you could trust the messages you sent and received.

This led Diffie and an electrical engineering professor, Martin Hellman, to create public key cryptography. It’s what everybody now uses on the web, but more importantly it had been a vague spectre haunting the NSA for years. The idea is brilliantly simple, and it produces encrypted messages for which key interception is no longer an issue, because the only key that’s transmitted - the public key - is one everyone is allowed to know.

Wikipedia's simple explanation

Following Diffie and Hellman’s conceptual breakthrough, Rivest and two colleagues - Adi Shamir and Leonard Adleman, the S and A in RSA - combined the public key idea with one-way functions, operations that are easy to perform but practically impossible to reverse, and the insanely powerful but easy-to-use encryption scheme became a usable system rather than a purely conceptual one.
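To make that concrete, here is a deliberately tiny sketch of the RSA mechanics in Python - toy numbers chosen for readability, nothing like real key sizes, and absolutely not something to use for actual security:

```python
# Toy RSA: a public key anyone may see, a private key that stays home,
# and modular exponentiation as the hard-to-reverse step.

p, q = 61, 53                  # two secret primes (real ones are hundreds of digits)
n = p * q                      # 3233, published as part of the public key
phi = (p - 1) * (q - 1)        # 3120, kept secret
e = 17                         # public exponent
d = pow(e, -1, phi)            # 2753, private exponent (Python 3.8+ modular inverse)

def encrypt(message: int) -> int:
    # Anyone who knows the public key (n, e) can do this...
    return pow(message, e, n)

def decrypt(ciphertext: int) -> int:
    # ...but only the holder of d can easily undo it.
    return pow(ciphertext, d, n)

plaintext = 65
ciphertext = encrypt(plaintext)          # 2790
assert decrypt(ciphertext) == plaintext  # back to 65
```

Intercepting the ciphertext and the public key gets an eavesdropper nowhere without factoring n, which is the whole trick.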

A third of the way into Crypto, the clash around these unlikely and unsuspecting crypto-heroes begins to unfold. As some creators of this technology attempt to cash in and lead the way toward email, ecommerce and, eventually, Bitcoin, others attempt to give it away as a matter of moral principle, while the US government attempts to stop them all.

It’s a brilliant story, built on eight years of interviews, in which the same arguments from decades ago echo, surprisingly unchanged, into our current political climate.

Though he never says it, Levy focuses on questions and stories that greatly resemble Daniel Dennett’s idea that Darwin’s evolutionary theory was a form of universal acid: an acid so strong it can burn through anything, even its container, so that to create it at all means it will inevitably burn through everything.

Regardless of any mistakes Darwin may have made, the idea still holds, and Levy’s history demonstrates the exact same point - once public key cryptography and one-way functions were out, it didn’t matter if the government tried to limit key sizes to however many bits it felt it could easily crack in the name of national security. The idea was enough that someone in the world with the proper motivation could simply go ahead and create something stronger - 1024 bits and beyond - once the computing power allowed it.

Likewise, today, the US government pressuring Apple means very little. Most folks, particularly those who engage in crime, know about burner phones, and it’s not as if data centers and smartphone manufacturers exist only in the US. If you’re running a terrorist cell, just don’t use your iPhone for crime. The idea of encryption is already out there. To this exact point, Levy’s epilogue is about a post-WWII British government cryptographer who invented, and was prevented from ever speaking about, public key cryptography.

Levy makes a pointed argument in his closing pages: the encryption made possible by public key cryptography and the RSA algorithm was ultimately beneficial, and the insistence on always having a backdoor made consumers distrust, and therefore avoid, US products, hurting software and commerce generally.

Though written in 2001, Crypto’s history is acutely relevant to our present situation, and it’s a baseline for anyone who wants to move from the barstool to the coffee shop, at the very least.

Levy’s book is also one of the better works of computing history, taking great pains to find the original people involved and interview them in depth. Too often computing history is wrapped in the hype of wealth (ahem, The Social Network) rather than the intrinsic value of the technology. When cryptography, or the next hot topic, becomes this personal and this integrated into our lives, books like Levy’s are all the more critical to generating informed discussion and finding a path forward, instead of rehashing the same tired arguments.

Code of the Week - April 5, 2016

Found in a legacy project, this line of PHP was intended to mirror DOM indentation in a Drupal theme function. I appreciate that the developer (whom I'm not picking on - I've certainly done worse) was trying to maintain a sense of layout; however, there are roughly a hundred lines of code above this line mixing tons of other business logic, metadata aggregation, database queries, and so on.

Therefore, you're not going to get any sense of the DOM from looking at this function. Instead, you'll get a really funky-looking append in the midst of all this chaos.

A function with these attributes and a line like this just screams for separation of concerns. Needless to say, it was quickly refactored so I could just read the thing.

You Must Beware of Shadows

The Eighth Commandment of The Little Schemer - use help functions to abstract from representations - is as obvious as most of the Ten Commandments. Of course you would use other functions to support or create abstraction.

To make the case, the authors represent primitive numbers as collections of empty lists (e.g. (()) for one, (()()) for two). This is a new level of parenthetical obnoxiousness, and for a while the reader may think, “Are they going to do this for the rest of the book?” - because the authors go on to demonstrate that much of the API used thus far in the book works for this representation too. But then there’s lat?
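As a rough analogue - my own translation into Python for readability, not the book’s Scheme - the trick looks something like this: numbers become nested lists, and the familiar arithmetic helpers can be rebuilt on top of them.

```python
# Represent numbers as lists of empty lists, then rebuild arithmetic on top.

def zero():
    return []                 # zero is ()

def add1(n):
    return n + [[]]           # tack on another empty list

def sub1(n):
    return n[:-1]

def is_zero(n):
    return n == []

def plus(a, b):
    # A recursive addition in the spirit of the book's o+.
    return a if is_zero(b) else plus(add1(a), sub1(b))

one = add1(zero())            # [[]]       ~ (())
two = add1(one)               # [[], []]   ~ (()())
assert plus(one, two) == add1(two)
```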

Testing reveals that lat? - which asks whether something is a list of atoms - doesn’t work as expected with such an abstraction, since these “numbers” are lists, not atoms. The chapter concludes with the two speakers exchanging:

Is that bad?
You must beware of shadows

This isn’t a very common thing to read in an instructional programming book; hell, you’d be blown away to see it in a cookbook.

Searching around online, I couldn’t find many people fleshing out a thorough explanation - just a couple of chat threads where most users said “I guess they mean... but who cares?” or simply complained about the book’s overall format. I know how I feel.

The phrase is out of place even in The Little Schemer, considering most chapters end with a recommendation to eat sweets. Nonetheless, it is a perfect fit for what the authors want the reader to consider.

Going back to the Eighth Commandment above, it’s a compact summation of the code-cleaning practices programmers can read up on in books such as Code Complete and Refactoring.

But why end the chapter like this and call the chapter "Shadows"?

It’s obviously a parental-level warning in the family of “keep your head up.” While a programmer can abstract a lot, the taller an abstraction is built, the greater the shadow it casts over operations that are still necessary - or rarely necessary, which can be even more painful (read the first chapter of Release It!). The shadows cast by the edifice of your abstraction lead ultimately to bugs, or worse, a crack in the abstraction that can’t be patched.

It’s a more delicate and literary warning than the one Joel Spolsky gave about frameworks. Spolsky, as usual, is more confrontational, and setting aside the possibility of him yelling at me about this topic, the Schemers’ warning sticks better. It’s like a caution given by an old woman at the edge of a dark wood.

However, these shadows are not cast by some creepy tree, but by our own code. It’s ultimately an admonishment to test, to check your abstractions in places you wouldn’t necessarily use them, and to be just as thorough as the creators of your language’s primitives. And, of course, to be afraid.

Brilliance and Thoroughness

I've made a lot of mistakes programming. Naturally, this can start to give you the sense that you may not be as sharp as you had hoped. Typically, I've brushed it off and told myself something about my hard work or persistence - maybe to spare my ego, or at least to give myself the feeling that I had value in the job market. Regardless, those stories never made me feel more confident, because almost all of my mistakes had nothing to do with being sharp, insightful or clever.

The classics of programming as a craft - The Pragmatic Programmer, Clean Code, Code Complete - generally boil down to one statement: “don’t be lazy.” When I read that statement, and all the many ways each author repeats it, and then compare it to my track record, I really fall short of the mark. Not because I’m comparing myself to excellent and well-known professionals, but because I trail in one defining quality of professional work - thoroughness.

To give a simple example of thoroughness - are you the sort of person who moves the furniture when vacuuming or sweeping? Do you wipe down the counter after cooking? Feel your plates after hand-washing to make sure they’re not still greasy?

Now, I wouldn’t call skipping these things “lazy,” but thoroughness is a requisite skill if you don’t want to be considered so.

For a long time, both in my kitchen and in code, I believed I could skate by on the tiny details. No need to check that site in IE; I assume it’s fine. Don’t worry about thinking up a few more test cases; most people will follow the same usage pattern. No one really reads commit messages, so no need to reread them just to be sure they’re clear. I'll rename that variable later.

None of these are errors or sins, but they are much like a kitchen counter that's never properly wiped down - crumbs congregate under the toaster, there's always a little moisture around the edge of the sink, and when you turn on a stove burner, the room starts to smell like burning carbon. 

It’s not laziness; it’s the presumptive hope that everything will work out so you can save a few minutes. Unfortunately, it’s exactly what John Wooden meant when he asked, “If you don’t have time to do it right, when will you have time to do it again?”

The sad thing is, I always assumed that brilliant programmers could just write stuff once and it would be exceptional and work every time. Like the story of Da Vinci drawing a perfect circle to demonstrate his capabilities - had he erased a couple of points and said, “Hold on,” we probably wouldn’t still be retelling a most likely made-up story. I believed, subconsciously, that I just needed some time to pass before I would get to that point.

I’m reminded of a cookbook by the famous molecular gastronomy chef Ferran Adria about his restaurant, El Bulli. Somewhere in the middle of this expensive cinder block of a book is a series of photos of the staff at El Bulli cleaning the kitchen, Adria included. They are wiping down every surface in that kitchen, including the legs of the tables, and staring with the same intensity they would give their dishes as they plate them.

These chefs are brilliant, but to be so, they also have to be thorough about even the most mundane tasks. Cleaning isn’t beneath them; it’s an essential element of being some of the best cooks in the world. Furthermore, the reason they are at that level most likely derives from this basic attention to detail, something the customers will never see. Adria's inclusion in these photos demonstrates that the process never goes away. Regardless of how well you cook, how inspired or efficient you are, the counter will still need to be wiped down.

Back in code, I no longer hope to be the super hacker, smarmy, always-gets-it-right-the-first-time programmer. Instead, I hope to start by cleaning well and see where that takes me.

 

The Little Schemer

Thanks to Cool Pacific

The Little Schemer spells out in its introduction that the book is about recursion. In programming circles, Schemer is generally known as a great book for learning Scheme/Lisp (see Paul Graham for all the praise of Lisp you’ll need) or functional programming. While that’s true, on my recent reading - roughly 20 years after its first edition - what stands out is how the book presents these ideas, more than their practicality in code.

Topical Focus

I’ve said before that there need to be more topically focused, short, consumable books for programmers, in contrast to giant tomes. Developers rarely need an immense index of a language’s every aspect, or to know every algorithm; instead they need specific cross-language experiences - families of algorithms, object-oriented programming, and, as with Schemer, recursion, clocking in at under 190 pages.

The early Lisp family, with its annoying parentheses, can quickly cause someone new to the language to either give up or invent Haskell. But as this book proves, Scheme’s syntax isn’t the point. The topic is recursion, and that’s it. Use a good IDE to help with your parentheses, move on, and be done quickly.

Dialogue

Really, Schemer is a Socratic discourse between a very sharp neophyte and his guide. A very short question-and-answer format is maintained throughout, except for the sprinkling of recursive commandments and a handful of asides.

The format is a breather from syntactically dense books (here’s how you make variables, here’s how you make arrays, classes, functions... 250 pages later: you know X new JS framework), academically dense books, and the “let’s program to the Extreme with bad jokes” books.

Using this format, Schemer is as nuanced as it comes, often annoyingly so, as the authors walk through recursive functions one logical decision at a time. However laborious that may be, it’s best to heed the authors’ recommendation not to rush your reading, any more than you would rush a good conversation, and to experience this unique approach to programming pedagogy.

The Why of the How

A large part of the intent of this Socratic method is to get down to why a person makes the choices they do, which is a lot more interesting and far more demonstrative of expertise. Compare asking an interviewee to write out Shell sort on a whiteboard with having that same person verbally walk a short array through the algorithm while explaining why it’s more efficient than a plain insertion sort.
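For reference, here’s a minimal Shell sort sketch in Python (my own, with a simple halving gap sequence), which is the kind of thing that verbal walkthrough would be reasoning about: the shrinking gaps let far-apart elements travel long distances early, which plain insertion sort cannot do.

```python
def shell_sort(items):
    """Sort a list in place using Shell sort with a halving gap sequence."""
    n = len(items)
    gap = n // 2
    while gap > 0:
        # Each pass is an insertion sort over elements `gap` apart.
        for i in range(gap, n):
            current = items[i]
            j = i
            while j >= gap and items[j - gap] > current:
                items[j] = items[j - gap]   # shift the larger element toward the end
                j -= gap
            items[j] = current
        gap //= 2                           # final pass (gap 1) is ordinary insertion sort
    return items

print(shell_sort([23, 4, 42, 8, 15, 16]))   # [4, 8, 15, 16, 23, 42]
```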

In a time when a common complaint is that the rush of new frameworks and languages is overwhelming - and something employers expect programmers to keep up with - the prevailing question is “how do I write something” rather than “why did the language come to work the way it does.”

For its format and focus, The Little Schemer transcends the modern sense of programming instruction. It won’t be taught in a coding bootcamp, because in Schemer’s universe bootcamps don’t exist: you’re not in a hurry to get a job, because there is no job to be had. Only understanding.

 

Why I Taught Java

When I taught my first programming class this past year, I decided to teach it in Java. My co-workers at the time gave me a bit of flak for this - I should be teaching Go or Python or something with a little better reputation.

As I remember, Java was looked at as the C++ killer when it first came out. It got overhyped, it’s the bane of everyone’s desktop with its seemingly daily updates, and it isn’t the prettiest language compared to more minimal programming styles.

So why bother teaching it? Particularly if it is semantically dense or harder to read?

Platform Accessibility

Java is free, its IDEs are free, and they work on practically every system. Sure, web programming has this quality too, but Java introduces students to application-style development that’s different from the web and further emphasizes object-oriented programming.

Furthermore, it provides an introduction to library packages, and it gives students the chance to type something and see something happen that isn’t just the command line.

Compilation and Typing

These are two concepts we could say aren’t that big of a deal, but they are conceptually necessary for students to grasp. Programs are typically compiled, and just introducing this requirement lets you illuminate the critical concepts of a compiler, build pipelines, and the layers of code that take you all the way down to the ones and zeroes.

Likewise, typing, while not a huge issue on systems like a laptop where space isn’t critical, informs students how memory allocation works, why people care, and how programs are designed with memory in mind.

It’s more semantics, but a class is supposed to teach concepts as well as practical programming. Those concepts make the practical lessons more effective.

It’s Hard

I believe there’s a certain level of work necessary for a concept to stick with students. The more details there are, the more attention is demanded, and the really important practices of programming tend to become ingrained along the way, in the very process of memorizing minor details like the labels for the different types.

Think about it - when you’re learning about types, you’re repeatedly thinking about variables and what they really mean, which lays the groundwork for more complex data types like arrays and, finally, for designing your own types with classes.

It’s Useful

Afterward, students are ready to look at a vendor-specific language like C# with a much greater baseline than before, because of Java. Likewise, other languages like Ruby and JavaScript, and even something like Arduino programming, will all look pretty basic - a relief, even - and they’ll pick them up with a lot more speed than if they had to work in the opposite direction.

I’m sure this sounds like a dad telling you that shoveling snow builds character. But I’ll take it - when you need to teach novice students, programming, too, requires character building.

Intuitive vs Powerful

When I describe a certain piece of software, I'll often add to the end of my description that the tool in question is either intuitive or powerful.

Simple comparisons like Photoshop vs Preview, Trello vs Jira, and Excel vs Google Spreadsheets demonstrate these dual principles. Something intuitive is easy to pick up, puts everything you need on the screen, and does a few things well and quickly. In contrast, something powerful has loads of features, complex and detailed minutiae even for a simple task like saving, and, of course, gives you complete control of whatever your subject is.

Generally speaking, most of us don't buy a $600+ copy of Adobe After Effects (well, you can just use Creative Cloud...) if MovieMaker will suffice. We don't need those extra widgets, we only use the thing every so often, and therefore, why pay that much?

After reading Chase Buckley's "The Future is Near: 13 Design Predictions for 2017", specifically prediction #9, Age-Responsive Design, I'd like to suggest software whose specific user instance lands somewhere on a spectrum between the intuitive and the powerful: software that evolves its complexity based on your use, skill and demands.

In this case, your copy of the software wouldn't necessarily look like every other user's. Take a simple example - if you only ever need the Sum formula in Excel, perhaps you only see a couple of options for that formula. Or maybe your Kanban board in Jira looks and behaves a lot more like Trello, and doesn't ask for a business value on a task unless it's actually needed.
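As a back-of-the-napkin sketch of that mechanic - all feature names and thresholds here are hypothetical, and Python stands in for whatever the real platform would use - the core is just feature gating keyed to observed usage rather than to a fixed role:

```python
# Hypothetical sketch: which UI features to show is decided from how often
# the user has exercised related ones, not from a fixed role.

from collections import Counter

# feature -> (prerequisite feature, uses of the prerequisite before unlocking)
UNLOCK_RULES = {
    "sum_formula": (None, 0),              # always visible
    "pivot_tables": ("sum_formula", 25),   # appears after heavy formula use
    "macro_editor": ("pivot_tables", 10),
}

class AdaptiveUI:
    def __init__(self):
        self.usage = Counter()

    def record_use(self, feature):
        self.usage[feature] += 1

    def visible_features(self):
        visible = []
        for feature, (prereq, needed) in UNLOCK_RULES.items():
            if prereq is None or self.usage[prereq] >= needed:
                visible.append(feature)
        return visible

ui = AdaptiveUI()
print(ui.visible_features())      # ['sum_formula'] - the Trello-like view
for _ in range(25):
    ui.record_use("sum_formula")
print(ui.visible_features())      # ['sum_formula', 'pivot_tables'] - complexity grows
```

The interesting design work is in choosing the rules and letting users override them, not in the gating itself.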

Naturally, this would mean our software gets a lot bigger. But take Salesforce, already a massive system: it seems plausible that a large majority of users would like a single interface that starts much simpler and evolves with their business needs. In fact, we know the demand exists because of products like SalesforceIQ CRM.

Unlike Salesforce's approach of divided software products (at least on the front end), I'd suggest a UI that is singular but varied.

Take a more basic example - this is a Drupal site, and I've done Drupal development for years. I would love to be able to put a Drupal site into at least three modes: Tumblr-like, WordPress-like, and finally full-powered Drupal. The majority of Drupal super admins are typically looking for a strong product but mainly use it for blogging. Yeah, I could create user roles and customize the experience - specialized content entry forms, menu navigation, and permissions grokking - but I'd love a single switch, with the evolving UI as a conceptual core of Drupal's interface.

Overall, I see three conditions that would push this concept into a mainstream software platform (most likely SaaS): first, a broad base of user types; second, an existing demand for complexity from a core group; third, strong stratification among those user types. From there, all you need is to build the UI that works for each of your types - in the same way Buckley describes designing for different age groups in his article.

Zeitgeist

Rummaging around YouTube the other day, I went and looked back over the Google Zeitgeist videos. 

I easily get swept up in these videos, as they are obviously designed to trigger your emotions about the previous year. However, comparing 2014's video (above) with 2010's -

 - I'm a little disappointed. In 2010, Google's software is the navigator of the video and its experiences; by 2014, the software takes a back seat to treated images.

When I first saw the 2010 video, I was personally very motivated to improve my software skills, as it was one of the first demonstrations of how much software could impact your life and let you experience it more richly. Google (or rather Google's advertising crew) shows this experience through its plain interface, which is our shared experience of using the software.

Now, it appears that Google is hitting on those triggers I mentioned above and associating itself with anything that happens, which takes away from the software experience. 
