Rant

It's not a perk


Photo via Geekwire

This week Microsoft announced that it would be building a cricket pitch as part of its envisioned Redmond campus redesign. Blog posts about the new design excitedly pointed out that this demonstrates both Microsoft's largesse and its changing workforce, which includes more professionals from countries where cricket is popular.

While it certainly demonstrates that MS appreciates the cultures of its staff, I have one question - why don't you do that by letting them go home on time?

Though less noted since Ballmer's departure, MS is not known as some heartwarming company that just wants to have a good time and make exciting technology. Like most companies, they expect to get their money's worth when they invest in their employees. I won't knock MS for its pay scale and benefits, and along with those it shouldn't be any surprise that they ask a lot of their staff.

So as the perks go up, anyone in technology should know something - they expect a return on investment. MS isn't doing this out of largesse - they are doing it as an advertisement of how great a company they are, how cool they are, and how it's all about having fun. But all that comes at a price. If they actually cared about people just playing cricket, they would build a pitch open to the public in a park. They could even call it the Microsoft Office 2018 Park for all I care.

For years I have worked for companies - not all of them, mind you - that thought a once-a-month happy hour with one free beer, or Cokes in the fridge, was compensation for fourteen extra hours a week. Even at a $10-an-hour minimum wage, a couple of beers doesn't pan out for anyone except the employer in that situation.
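To put rough numbers on that - a back-of-the-envelope sketch, and the beer price is my own guess - fourteen extra hours at $10 an hour is $140 of labor a week, against one free beer at maybe $6. That's better than a twenty-to-one trade in the employer's favor.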

Instead of a happy hour on Friday, can I leave an hour early, or just keep working so I don't have to stay late on Monday? Because the work still needs to get done.

The more perks a company lists in its profile, the more concerned I am. Health insurance is ultimately a salary negotiation, but noting that you have a sweet pool table and Jimmy John's every Tuesday tells me I'll be expected to work late and not leave the office on Tuesday.

I'm not especially annoyed about the cricket pitch in Redmond. I'm not losing sleep or thinking that the corporate overlords won. I'm sure it will ultimately host intramural sports between teams and employee appreciation days.

Ultimately, however, while MS shouldn't look like a fucking 1930s coal factory, we also shouldn't kid ourselves that "campuses" are there for our benefit. As Bill Gates in The Simpsons so aptly put it - I didn't get rich by signing a bunch of checks.

 

Driving in an Empty Room


Photo by Jaromír Kavan on Unsplash

Let's set up a scenario, the details of which you can color however you like - a self-driving car is going down a road next to a ravine, and suddenly a group of school children jumps out in front of it. The car's AI now has a decision to make - kill the children or kill the driver.

This hypothetical and its variants are intended to force us, as a technological culture, to confront the dangers and philosophical implications of our promised future. How do we teach computers who lives and who dies when we as humans can't make that decision ourselves, even with perfect timing?

The problem is that the question is idiotic. I mean that in the kindest possible way - it’s the type of question regarding technology that attempts to sound profound, but is like deciding world politics using a Risk board.

Let’s take almost this exact situation without a self-driving car:

What happened? Did the driver have to make a decision between the kid's life and their own? No - the driver did exactly what a self-driving vehicle would do: they braked.

Of course, this was only possible because of revolutionary technological breakthroughs in braking systems. Had that been a truck fifteen years older, we would have seen a much darker end. There wouldn't have even been an option to swerve out of the way, regardless of how responsive the driver was.
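To make that concrete with some rough, purely illustrative numbers (the speed and deceleration figures are my own assumptions, not anything from the clip): at about 18 m/s (roughly 40 mph), braking distance is v²/(2a). An older truck managing around 5 m/s² of deceleration needs about 32 m to stop; a modern car with ABS pulling around 8 m/s² needs about 20 m, and automatic emergency braking can also shave off much of the 25-plus meters that a human's reaction time adds before the brakes even engage. The philosophy didn't change - the stopping distance did.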

The point here is that a seemingly dull piece of technology - braking systems - suddenly makes this apparently intense philosophical question pointless. While the trolley problem or the example above might be a fun way to superficially warn about the dangers of technology, it is the associated technologies that come along with self-driving cars that will resolve these problems - not solving the philosophy.

If you looked at the first Wright Brothers plane and then imagined it flying at 30,000 ft, would you ask - is it worth the fast travel if some passengers will freeze to death? No. That's stupid - you'd instead think, as history has shown, that you aren't going to fly that high until you build safe environments in which that is no longer a question.

Take an example from the film Demolition Man. A silly movie to be sure; however, one piece of technology has always stood out to me - the styrofoam crash mechanism. For those who haven't seen the movie: to Stallone's surprise, when he crashes a car for the first time, the body completely transforms into styrofoam that pads his impact, eliminates shattering glass, and allows him to quickly break free from the wreckage.

It's a fun visual gag; however, imagine our self-driving car with a similar mechanism. What, beyond inconvenience, does it matter if you fly off into the ravine if your car turns into a safe bubble? There's no philosophical problem there, only an insurance problem.

While the barstool criticism of self-driving cars ponders how best robots can serve man, keep in mind that a car is more than its driver. Even modern vehicles do their best to mitigate the harm to the driver and to those around them, regardless of the decisions of whoever is behind the wheel.

 

Zeitgeist

Rummaging around YouTube the other day, I went back and looked over the Google Zeitgeist videos.

I easily get swept up in these videos, as they are obviously designed to trigger your emotions from the previous year. However, comparing 2014's video (above) with 2011's - 

- I'm a little disappointed. In 2010, Google's software is the navigator of the video and the experiences; by 2014, the software takes a back seat to treated images.

When I first saw the 2010 video, I was personally very motivated to improve my software skills, as it was one of the first demonstrations of how much software could impact your life and let you experience it more richly. Google (or rather Google's advertising crew) shows this experience through its dull interface, which is our shared experience of using the software.

Now, it appears that Google is hitting on those triggers I mentioned above and associating itself with anything that happens, which takes away from the software experience. 

Holograms Interact with the Environment and Each Other

...But just not you. Better writers than I have already taken on the subject of the importance of physical feedback in games and digital interaction in Codename: Revolution, specifically targeting the failures of the Kinect versus the Wii.

But personally, I can't see this demo:

And not think about a section of Errant Signal's review of Watch Dogs.

Who started it?

In the final chapter of Steve Jobs, author Walter Isaacson assembles a bulleted list of Jobs' greatest career contributions – the iPhone, Pixar, the whole App ecosystem – and of course the Macintosh, which Isaacson describes as the machine that "begat the home computer revolution and popularized the graphical user interface."

Brian Bagnall, author of Commodore: A Company on the Edge, would beg to differ:

“[The] rosy picture of Apple starting the microcomputer industry crumbles under inspection,” he writes in the introduction to his book. “Commodore put computers in the hands of ordinary consumers.”

Bagnall counters with his own list of what he believes were Commodore's successes – the first to sell a million computers, the first major company to show a personal computer, and the first to release a multimedia computer.

In this way, Bagnall's book begins as a direct challenge to Isaacson's, but aside from this opening salvo and fighting over turf, the books are excellent complements to each other.

While Isaacson's book is not strictly about Apple the company and Bagnall's book is about Commodore as a whole, the two have a lot in common - each author had a tremendous amount of access to the people involved in the history of these companies and saturated his book with quotes and firsthand sources, and both are very concerned with, and detailed about, late-70s and 80s computer history.

Commodore, which closed its doors in 1994, was the progenitor of the Commodore 64 and the manufacturer of the Amiga. Commodore has become emblematic of the shifting sands in the computer industry in the transition from the 80s to the 90s. Commodore had money, had technology, had vertical integration (much emphasized by Jobs in his biography), and yet it couldn't survive. While Bagnall has yet to publish a long-awaited follow-up to Commodore, The Future Was Here by Jimmy Maher, a book profiling the Amiga, explains that Commodore simply didn't know what to do with their computers or how to market them as multimedia became a requirement for consumers. Jobs, however, certainly knew how to market his machines.

Bagnall's book makes a thorough and persuasive presentation of the contributions of Commodore and, notably, its technological heart, Chuck Peddle. You can't read Bagnall's book and then look at Isaacson's bullet-point list of Jobs' accomplishments without thinking “Well…”

Unfortunately for Bagnall's subject, history is written by the victors. Or rather, by the fans of the victors, it seems.

This collision between the two authors is central to a lot of what is currently being written about personal computing history. There's a plethora of books available now that attempt to state who invented the computer, or who sparked the revolution (since it needs to be called that for whatever reason), or what influenced the revolution. It all ultimately boils down to one question - who was first, such that they deserve the credit?

Commodore, to its credit, is getting recognition - the documentary From Bedrooms to Billions, the aforementioned The Future Was Here, and the tremendous amount of retro computing interest have brought the company's technology back into the popular eye. For many of us, Commodore sits as a time capsule of that era - we can perceive its products unaltered by any present form of the company, which is not the case with Apple.

For myself and my near-peers, computing history is not just an industry's history on its own merits; it is also valuable because it is our personal history as well. We experienced that history, and many of the details filled in by books like Isaacson's and Bagnall's are enriching simply because we can say to ourselves “Oh yeah, I do remember that.” It helps us explain and understand the evolution of the digital world - one which we directly inhabit and which is still quite new.

However, I don't care who started it. Computer history is always such a tremendous confluence of factors that assigning historical responsibility is a pointless task. Oh, absolutely, individual people had tremendous impact or influence, but I can't say Alan Turing started the computer revolution any more than I can say Morris Tanenbaum did. In fact, it is the combination of factors that makes the field of study so fascinating. The iPhone wouldn't be half as interesting if we didn't have social outlets that give us a constant reason to be engaged with and notified by our phones.

I commend Isaacson and Bagnall for their enormous efforts to document the history of personal computing and for taking on such large subjects and persons of importance. While I agree that Commodore has indeed been downplayed, as Bagnall claims, the real problem isn't that Commodore isn't getting its just deserts from history. Instead, the motivation to claim ownership of the digital age is going to be there regardless of the history, especially when money, power, and success are intertwined.

What is Wrong with Fighting Tournaments

The above video is funny, and it points out a practical annoyance for anyone playing tournament fighting games. Personally, I always figured there were other fights going on in the background, and to be fair, in the original Mortal Kombat you do see a bunch of random bodies lying around in the spike pit that look fairly fresh.

But there is one more element to most fighting games that I despise, besides the fact that they are not technically tournaments - no one in the game treats it like a tournament. Watch the below series of cut scenes. Well, you probably don't need to watch all of it. In the Mortal Kombat reboot there are only a handful of times when people are actually fighting in matches. Aside from that, people just constantly try to kill one another at random.

Well shit - if you can do that, just fucking have everyone do a fatal melee. In the MK universe in particular, I know Shang Tsung's a bad guy, but if all he ever does is cheat and have people murdered, then what the hell is the point of having a tournament? Just have people come to your evil island and throw them in the fucking spike pit or whatever.

And real quick - who the hell are those monks in the background?

My point is: the lack of any semblance of a tournament really makes these tournament games feel like they are awkwardly trying to fit a larger story into what is essentially two people pummeling each other. AND that's exactly what they are doing. BUT - I would suggest that there may actually be interesting stories to be had in fighting tournament games, if the designers had characters that actually adhered to a tournament. Perhaps there are different conditions as you go through the tournament - just a thought.

Now, if you'll excuse me, I have a creepy island to head to for a tic-tac-toe competition...

Code Fetishism is Bad

I hate this shit:

This shit:

Oh and this shit:

Granted - I get why these scenes are in television and movies. They take a dry subject and make it look a little more interesting than it otherwise would. Furthermore, technology always carries a little bit of mystique with it. That's fair, and I don't think movies should stop doing it. I'd hate to have to watch some hacker character run Linux updates when they could have cool graphs and code moving on screen.

But I can't get past it. There are three main issues I have with these scenes, and all of them undermine people actually becoming awesome superhackers:

Confusion about turning knowledge into action. There is a lot of stuff you need to do to make code work, even hacked-together code. There are a lot of support programs, technical manuals, and such that you have to slog through. Hey, but we've got a goal, so it's worth the sacrifice. Sure, movies set up false expectations about the work involved in everything, but there's not even a training montage in computer movies.

Focus on self-satisfaction. It's like watching the Food Network - chopping potatoes is an exquisite experience, but satisfaction in the actual cooking, at least for me, doesn't come from softly smelling fresh-cut rosemary, but from the assemblage of everything. In The Social Network in particular, the slow and overly dramatic drawing of the algorithm, while very cool, is not the point of what Zuckerberg is even doing in the scene. It's pointlessly indulgent, and it would therefore be a waste of any decent coder's time to revel in such things.

Confusion between what's in great use and what's actually reflective or meaningful. Maybe this is just a hole in the market for movies where people actually derive meaning from interacting with computers and code, but all of this flash - unrealistic flash at that, ego-focused flash - visualizes excellence within computing and even hacking in such a false and bullshit way that it distorts what's substantive.

Each one of those issues ultimately deters people from computing, as they completely misdirect its value and confuse where you find meaning in it. It's false advertising.

I suppose people can look past it, and perhaps these scenes are entry points, but the fetishism is ultimately abandoned nonetheless.
