People are very used to cooking, want to feel productive and make something, but say they don’t have the time. “They always say they wish they could cook more but feel like they don’t have the time for it,” Munchery CEO and co-founder Tri Tran told TechCrunch in a blog post today on Munchery’s new service, which delivers ingredients and recipes in addition to their pre-made meals.
I hate to be this guy, but come on, there’s more to it than that.
I’m not criticizing Munchery’s service; in fact it’s a fantastic idea, particularly for couples or families looking to do something together around food that’s more than picking up takeout. People do want the experience of cooking, and the service takes the overhead out of shopping and choosing what to eat from the endless options at the grocery store. Or, as Munchery calls it, the “so-called grocery market.” Who says that? Were grocery stores somehow a contentious topic? Does pushing against competitors require political devaluation? Anyhow...
Plus, people don’t seem to know how to portion things like onions and rice if they don’t cook a lot. And that’s really the trouble with this service, particularly when stacked against its competitors: most folks don’t know how to cook well.
This is essentially takeout at the end of the day, an expense in money and time with reduced service. The raw vegetables, grains, and proteins aren’t the only source of flavor. Your teriyaki down the street tastes so damn good because the cook there has grilled more chickens than people he’s met in his life, and the grill itself, charred with the flesh of thousands of thin chicken breasts, isn’t something you can whip out for date night.
Damn, generally people don’t even know how to brown meat. I’m not trying to be critical, it’s true: check your meat the next time you pan-fry something. The recommended heat and frying times from your Munchery bag aren’t gonna help much either, unless you have the test kitchen the recipe was prepared in.
Part of the reason people don’t have time to cook is that they don’t know how to cook effectively. So unfortunately there’s gonna be some disappointment. After all, a home-cooked meal is more than takeout deconstructed.
...But just not you. Better writers than I have already taken on the importance of physical feedback in games and digital interaction in Codename: Revolution, specifically targeting the failures of the Kinect versus the Wii.
But personally, I can't see this demo:
And not think about a section of Errant Signal’s review of Watch Dogs
I’ve already written on the excellent Platform Studies series from MIT Press, and since discovering the series, the platform I have been waiting for is the one that meant the most to me growing up – the Nintendo Entertainment System.
I AM ERROR by Nathan Altice finally dives deep into the hardware architecture of the NES and the expressive capabilities derived from its chips. I bought the book immediately, but surprisingly, I wasn’t as engaged as I thought I would be.
ERROR is an excellent book: incredibly detailed and well-researched, with a breadth of topics spanning hardware development in Japan, assembly-coded music, and ROM hacking across the internet in the early 90s.
These topics aren’t why ERROR didn’t pull me in. Instead, it’s a victim of the series’ own success. Racing the Beam, the inaugural title of the series, covering the Atari 2600, is also amazingly detailed, particularly in the translation of game concepts through the examined hardware architecture into actual expressive gameplay.
ERROR mirrors this same level of translation, which is after all the intent of the series, and while the NES has a lot more going on, the ah-ha nature of that translation is much clearer and more accessible in Beam.
ERROR moves around a lot more, and is specifically concerned with the idea of “translation,” given that the NES is known in its home country of Japan as the Famicom. The book’s goal, laid out in the introduction, is to determine how many perspectives on translation we can take when studying the NES, and it’s a brilliant thematic idea.
But the hardware isn’t as interesting in this case if you have been following the series, and honestly, for a person of my acumen in the realms of PCB and chip technology, it was a little over my head at times. That said, maybe one day I’ll appreciate it a lot more.
In contrast, two other titles in the series - The Future Was Here and Codename: Revolution - took their platforms, the Commodore Amiga and the Wii respectively, and while touching on the hardware, explored other aspects of expressive interaction with the machine, to the point where the hardware was only a jumping-off point.
It’s like getting really into the details of how the strings and pickups on a guitar interact, while most folks are only concerned with “what does it play?”
Future primarily covered the expressive gaming and the demoscene provided by the new multimedia computer, and Revolution discussed what it meant for software and hardware to interact in a physical space. Sure, hardware had to be mentioned to start these conversations, but again, they are our baseline for further exploration of artistic and human ideas transformed into a digital medium. In this way, the books remind me a lot of 10 Print’s vignettes on maze generation code.
To be fair, I should backtrack on my criticism, which only comes from an embarrassment of riches. The intricacies required to program games cleverly on the NES are amazing, deserving a nod of respect to those developers, and a rich primer on how graphics programming developed into a higher level of complexity than on the Atari. Likewise, the chapter “2A03,” on the sound chip architecture in the NES, would be my first recommendation to anyone interested in sound chip programming, and a nice slice of humble pie for any contemporaries who currently do it with any degree of ego.
Finally, the chapter “Tool-Assisted,” while titled after popular tool-assisted speedruns (and, I’d note, glitch fun), has a wonderful and well-explained history of hardware emulation, digging deep into IBM’s history, that is interesting on its own even for a software person not interested in games, and of emergent importance for emulators used in production and development environments.
Overall, I’d recommend buying the book if you have been reading any of the Platform Studies series, as it is likely the most well-researched title yet, and if you have not read any of these books and are not a hardware nut, reading Racing the Beam and I AM ERROR back-to-back would be an excellent combination to get you started on that path.
Silicon Valley is funny. Mike Judge has a lot of cred in finding the absurd in modern middle-class and suburban life, and as the show’s executive producer, he brings that style expertly to Valley. The show has always done a great job of actually telling jokes, finding the humor in character motives rather than tacking on whatever it could shoehorn in.
I really liked season one of this show, and I liked season two, which recently wrapped up its run on HBO.
Unfortunately, the show is already showing fatigue. The plot revolves around the continual fight between Pied Piper and the CEO of Hooli, who attempts to gain control of Piper’s middle-out data compression algorithm through any means necessary as the staff of Piper try to secure funding to fully launch their product.
The problem with this setup is that it is completely episodic and random – "Oh look, Hooli is pulling some bullshit legal tactic, oh now they’re doing something else I already forgot about because it’s brushed off as soon as it happened, WHAT? our angel investor is a crazy person and now he’s doing weird eccentric thing X, which as with Hooli we’ll discard as a memory and plot thread when the credits roll."
It’s not a story that actually builds, and because of that, when the Piper crew eventually succeeds, I don’t really get the sense that they were up against much, just that they were annoyed throughout the season.
In contrast, season one actually built toward TechCrunch Disrupt, and involved more than outside annoyances - the relationships of the team, the transition to a real company, and the competition. That’s a lot more intuitive, and as an audience member you can anticipate the conflicts you would expect from a growing company.
All of which totally sucks, as this season actually had more heart.
Richard’s pep talk that the team was there to “build epic shit,” alongside Jared’s maudlin speech about having had the best time of his life at Pied Piper despite all the stress, takes something really commodified, the start-up, and actually gives it some warmth.
There are a lot of folks in Silicon Valley and here in Seattle less concerned with what they’re passionate about and more concerned with what will sell, and this is more insidious than some huge company like Hooli. It’s an internal, intentionally adopted destruction of one’s dreams, rather than an antagonistically driven defeat.
Drupal developers typically follow this pattern when they start to work with layout systems: nodes to blocks to maybe Panels to Views. Now, these are obviously intertwined, and of course a lot of us jump into custom templates and Views-like modules that aggregate and filter content.
To a Drupal dev, the progression through these elements seems pretty obvious and natural to content management. First you create content chunks, then you arrange more specialized content pieces, and finally you start creating lists and contextual filters of content.
When you speak with the clients who will actually use your system, however, this progression to and use of Views isn’t as natural or intuitive.
For a lot of folks not in web development, pages of content are just that - individual documents that stand alone. They aren’t nodes, they definitely aren’t fields, and from there the concept of filtering nodes to access fields is a conversation that isn’t going to translate well between dev and non-dev. But particularly when using systems like Taxonomy or Nodequeue with Views, where curation is a focus, it’s critical that users understand the basic idea.
So what is the best way to get these ideas across?
Tech terms are obviously a non-starter. Referencing SQL or the index.php query system won’t help. Neither will the name “Views”: it’s just too general, and it’s worse yet if a person has enough knowledge to understand what a “view” is within MVC, as it can easily be confused with templates.
Thus, keep terminology to “Lists” and “Filtered Lists” or whatever wording works best.
The simplest way to explain Views from there is that these lists are lists of pages built from their parts: we can set filters to “go get” the pages we want and pull out just their “Title,” for example.
Here again, saying “fields” and “content types” is not very helpful either. Likewise, when discussing filters, if you’re using Taxonomy, “label” or similar may work best, since that’s the more common term for the same function in other SaaS platforms.
From there, concepts like limits, counts, and pagers are a little more familiar to most people who use the web.
The key point therefore is to step out of the Drupal lingo and start thinking in terms of what’s more common. If users reach the point where lingo becomes important, then it’s fine to switch over, but until then, communicating the key operational concepts is more vital than keeping things in Drupalesque.
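Mechanically, the “filtered list” framing can be shown outside of Drupal entirely. Here’s a minimal sketch (a hypothetical `Page` struct in C++ for illustration, not Drupal code): filter the pages you want, then pull out only the part you chose to show.

```cpp
#include <string>
#include <vector>

// A hypothetical stand-in for a piece of content: a "page" with parts.
struct Page {
    std::string title;
    std::string label;  // e.g. a taxonomy-style label used for filtering
};

// "Go get" the pages whose label matches, and return just their titles.
std::vector<std::string> filteredTitles(const std::vector<Page>& pages,
                                        const std::string& label) {
    std::vector<std::string> titles;
    for (const Page& p : pages) {
        if (p.label == label) {
            titles.push_back(p.title);
        }
    }
    return titles;
}
```

That’s the whole conceptual model a client needs: a list, a filter, and the parts you choose to display.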
And indeed the book did exactly what it said it would do and no more, a result that a lot of programming books fail to deliver on. Simple question: how do I solve the tunneling problem in collision detection? For Schwarzl it’s a couple of pages, while for a lot of books, their thoroughness in answering this programming problem becomes less a technical issue and more a readership problem.
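For reference, the problem: a fast-moving object can pass clean through a thin wall between two physics steps, because no sampled position ever overlaps it. The standard fix (my own 1D toy sketch here, not Schwarzl’s code) is to test the swept path between steps instead of only the end position:

```cpp
#include <cmath>

// 1D toy: a point mover versus a thin wall at x = wall (half-thickness eps).
// Discrete test: only samples the end position, so a fast mover can land
// entirely past the wall and "tunnel" through it undetected.
bool discreteHit(double pos, double vel, double dt, double wall, double eps) {
    double next = pos + vel * dt;
    return std::fabs(next - wall) <= eps;
}

// Swept test: did the segment from pos to pos + vel*dt cross the wall?
// A sign change in (position - wall) means the path went through it.
bool sweptHit(double pos, double vel, double dt, double wall) {
    double next = pos + vel * dt;
    return (pos - wall) * (next - wall) <= 0.0;
}
```

With pos = 0, vel = 100, and dt = 0.1, the mover jumps straight from 0 to 10, so the discrete test misses a wall at x = 5 while the swept test catches the crossing.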
While I wouldn’t knock books attempting to provide a lot of knowledge for my page-buying buck, I would say there is a space missing in programming literature: the small book, and in particular, the small specialized book.
I have a couple of algorithm books on my shelves (well, in stacks on my floor), but some of these, while serving as rich collegiate textbooks, aren’t anything that can really engage the beginner or improve the intermediate developer.
By all means, I won’t be returning Sedgewick’s Algorithms; however, I would like to see books along the lines of 4 Sorting Algorithms that barely top 90 pages. Books of this variety wouldn’t be intended as the end of all study, but could provide a low-cost entry into a new subject without intimidation, high cost, or excessive detail.
Sometimes you just want to be told in a few sentences what a bubble sort is without, for the moment, worrying about its O-notation compared to other algorithms. Similar series elsewhere - A Very Short Introduction, How to Read…, and every book on meditation - demonstrate the practicality of and demand for reading of this type.
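To demonstrate the point, here’s a bubble sort in its entirety, the kind of few-line answer such a book could lead with (a C++ sketch for illustration):

```cpp
#include <utility>
#include <vector>

// Bubble sort: sweep the vector, swapping adjacent out-of-order pairs;
// stop when a full sweep makes no swaps. Each sweep "bubbles" the largest
// remaining element toward the end.
void bubbleSort(std::vector<int>& v) {
    bool swapped = true;
    while (swapped) {
        swapped = false;
        for (std::size_t i = 1; i < v.size(); ++i) {
            if (v[i - 1] > v[i]) {
                std::swap(v[i - 1], v[i]);
                swapped = true;
            }
        }
    }
}
```

That’s the whole idea; the O(n²) comparisons and the comparisons to other sorts can come later.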
As literacy in programming expands, I do expect these types of books to appear more consistently, and we’ll see a move away from textbook and language-survey tomes toward books on very specific topics that readers can quickly consume and then apply.
’Course, I guess I could start writing them myself...
This weekend, after completing the ARDX Experimentation Kit for Arduino, I decided to think of the simplest possible game I could build using a variety of the parts provided in the kit.
What I arrived at was Is It Even? - a game that challenges players to answer yes or no to whether a displayed binary number is even. Now, for anyone who knows how to read binary, this is a pretty damn simple game. But the intent was to combine a few elements that would fit on the board that came with the kit.
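The core check is the same trick that makes the game trivial for binary readers: a binary number is even exactly when its lowest bit is 0. A sketch of the game’s answer logic (my reconstruction in plain C++, not the actual sketch code):

```cpp
// A binary number is even exactly when its least significant bit is 0,
// so the "answer key" for Is It Even? is a single bitwise test.
bool isEven(unsigned int n) {
    return (n & 1u) == 0u;
}

// Score a player's yes/no answer against the displayed number.
bool answeredCorrectly(unsigned int displayed, bool saidEven) {
    return isEven(displayed) == saidEven;
}
```

Everything else on the board - displaying the number, reading the yes/no buttons - is wiring around that one-line test.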
Despite knowing nothing about these devices, I found it amazingly simple to put the project together, and the examples in the experimentation kit, more than the Arduino C code examples, are very empowering to play around with. I highly recommend this kit for any beginner.
Now if only I could think of something that isn't so useless.
In the final chapter of Steve Jobs, author Walter Isaacson assembles a bulleted list of Jobs’ greatest career contributions – the iPhone, Pixar, the whole App ecosystem – and of course the Macintosh, which Isaacson describes as the machine that “begat the home computer revolution and popularized the graphical user interface.”
“[The] rosy picture of Apple starting the microcomputer industry crumbles under inspection,” writes Brian Bagnall in the introduction to his own book. “Commodore put computers in the hands of ordinary consumers.”
Bagnall replies with his own list of what he believes were Commodore’s successes – the first to sell a million computers, the first major company to show a personal computer, and the first to release a multimedia computer.
In this way, Bagnall’s book begins as a direct challenge to Isaacson’s, but aside from this opening salvo and fighting over turf, the books are excellent complements to each other.
While Isaacson’s book is not strictly about Apple the company and Bagnall’s book is about Commodore as a whole, the two have a lot in common: each author had a tremendous amount of access to the people involved in the history of these companies and saturated their books with quotes and firsthand sources, and both are very concerned with, and detailed about, late-70s and 80s computer history.
Commodore, which closed its doors in 1994, was the progenitor of the Commodore 64 and the manufacturer of the Amiga. The company has become emblematic of the shifting sands of the computer industry in the transition from the 80s to the 90s. Commodore had money, had technology, had vertical integration (much emphasized by Jobs in his biography), and yet it couldn’t survive. While Bagnall has yet to publish a long-awaited follow-up to Commodore, The Future Was Here by Jimmy Maher, a book profiling the Amiga, explains that Commodore simply didn’t know what to do with their computers or how to market them as multimedia became a consumer requirement. Jobs, however, certainly knew how to market his machines.
Bagnall’s book makes a thorough and persuasive presentation of the contributions of Commodore, and notably of its technological heart, Chuck Peddle. You can’t read Bagnall’s book and then look at Isaacson’s bullet-point list of Jobs’ accomplishments without thinking “Well…..”
Unfortunately for Bagnall’s subject, history is written by the victors. Or rather, fans of the victors it seems.
This collision of differing authors is central to a lot of what is currently being written about personal computing history. There’s a plethora of books available now that attempt to state who invented the computer, or who sparked the revolution (since it needs to be called that for whatever reason), or what influenced the revolution. It all ultimately boils down to one question: who was first, such that they deserve the credit?
Commodore, to its credit, is getting recognition - the documentary From Bedrooms to Billions, the aforementioned The Future Was Here, and the tremendous amount of retro computing interest have brought the company’s technology back into popular interest. Commodore, for many of us, sits as a time capsule of that era: we perceive its products unaltered by any present form of the company, which isn’t the case with Apple.
For myself and my near-peers, computing history is not just an industry history on its own merits; it is valuable because it is our personal history as well. We experienced that history, and many of the details filled in by books like Isaacson’s and Bagnall’s are enriching simply because we can say to ourselves, “Oh yeah, I do remember that.” It helps us explain and understand the evolution of the digital world, one that we directly inhabit and that is still quite new.
However, I don’t care who started it. Computer history is such a tremendous confluence of factors that assigning historical responsibility is a pointless task. Oh, absolutely, individual people had tremendous impact or influence, but I can’t say Alan Turing started the computer revolution any more than I can say Morris Tanenbaum did. In fact, it is the combining factors that make the field of study so fascinating. The iPhone wouldn’t be half as interesting if we didn’t have social outlets giving us a reason to always be engaged with and notified by our phones.
I commend Isaacson and Bagnall on their enormous efforts to document the history of personal computing and on choosing such large subjects and persons of importance. While I agree with Bagnall that Commodore has indeed been downplayed, the whole problem isn’t that Commodore isn’t getting its just deserts from history. The motivation to claim ownership of the digital age is going to be there regardless of history, especially when money, power, and success are intertwined.
Most game programming books are one of two things: very specific – particular game engines, physics, rendering, AI – or very general, for the beginner.
Robert Nystrom’s Game Programming Patterns is right in the middle.
As the title indicates, the book doesn’t focus on a particular engine, individual component, or language, though examples are given in a stripped-down C++. Instead, Nystrom’s book takes its cue from the Gang of Four’s Design Patterns - presenting a problem, then a pattern that attempts to resolve its complexity, with higher aspirations of reusability and performance both for the computer and for the programmer.
GPP is an excellent companion to its inspiration, even for the general design pattern enthusiast. Games, at this point in time, are something most programmers have some level of knowledge of, if not expert knowledge, even if only as players. Framing design patterns within this context allows Nystrom to provide relatable examples and to move away from the Gang of Four’s need to be as neutral as possible when outlining their ideas. In other words: it clicks better because the content is very close to our experience.
The book is organized around groups of patterns and most of the chapters are focused on problems that are unique to game development.
The first section revisits several of the Gang of Four’s canonized patterns – command, flyweight, observer, prototype, singleton, state – even taking a number of them to task as misused, bad, or at the very least poorly described by the original authors. Nystrom certainly could have chosen other patterns (composite, builder, and visitor spring to mind); in fact, it’d be awesome if he extended the series, but he had to find a limit someplace.
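For flavor, the core move of the command chapter can be sketched in a few lines of similarly stripped-down C++ (my own toy, not the book’s code): reify an input action as an object so it can be queued, remapped, replayed, or undone.

```cpp
#include <memory>
#include <vector>

// The actor the commands operate on.
struct Actor { int x = 0; };

// Command: an input action reified as an object, so it can be stored,
// remapped to different buttons, replayed, or undone.
struct Command {
    virtual ~Command() = default;
    virtual void execute(Actor& a) = 0;
    virtual void undo(Actor& a) = 0;
};

struct MoveRight : Command {
    void execute(Actor& a) override { a.x += 1; }
    void undo(Actor& a) override { a.x -= 1; }
};

// Record executed commands so the whole sequence can be rolled back.
struct History {
    std::vector<std::unique_ptr<Command>> done;
    void run(Actor& a, std::unique_ptr<Command> c) {
        c->execute(a);
        done.push_back(std::move(c));
    }
    void undoAll(Actor& a) {
        while (!done.empty()) {
            done.back()->undo(a);
            done.pop_back();
        }
    }
};
```

The payoff is exactly what the chapter sells: once an action is an object, features like rebindable controls and replay fall out almost for free.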
The other sections dive more deeply into game programming specifics, including sequencing, behavioral structures, decoupling, and optimization, but the descriptions still keep the discussion at the pattern level. These chapters cover problems that specifically arise in game development, as opposed to, say, web development, and Nystrom is even firmer in tone: use the pattern when you need it, not just because.
The patterns discussed near the end of the book specifically belabor this point, as the optimizations they aim for really do make code more complex and difficult to debug, but can greatly improve performance for games that need it.
Overall, Nystrom’s book was like a great pair-programming buddy. He presents most patterns initially as “hey, let’s just get it working,” then refactors through the introduction of a pattern, and then asks, “do we really need this?”, all the while with asides and geek notes in the sidebars.
While this information can come from other places targeted at game developers, Nystrom’s a good writer. I mentioned above that this book fits in the middle; in truth it probably leans more toward the beginner. That said, the value of the book for the more experienced is gaining another perspective on patterns and code practices from an author who delights in them.
Game Programming Patterns is indeed useful for its titular topic, but as a whole, it made programming, patterns used or not, just fun.
Above is a plot of the growth of z-indices over time in a CSS project. Earlier this week I was told that my QA team was discovering websites in our network with z-indices upwards of 90,000,000. Hence I have developed a hard observational rule about z-index:
The higher an extant z-index, the greater the increase in z-index used to override it.
I have no reasoning for why this rule exists, perhaps it's ego or the frustration of just wanting to be on top, but take some time and observe z-index in your and others' projects and you'll find it to be true. Around 100, people will jump between 90 and 110; once you get to 500, you're in the realm of 50-point increases; and past 1000, you'll see serious gains. Finally, someone will just say fuck it and start making 500+ jumps.
I've had the fortune of seeing good style guides, where the developers define actual layers for their application to ensure that items are properly scoped. The joys of a well-designed application. Unfortunately, when dealing with CMS structures, embedded iframes, and similar black-box situations, "fuck it" becomes a definitive answer.