When I read Jon Talton’s article, “Washington state has to play the add-value card, not low-cost-leader ace,” the title alone got me thinking. With globalization bearing down on us both now and in the coming years, we will have to take an intelligent approach in order to remain a relevant player in the high-tech world. As the article says, opinion is basically divided between two camps: appeal for money with low costs, or appeal for money with valuable, high-quality goods and services. I agree with Talton: we need to hold ourselves to a higher standard and use our highly educated workforce to our advantage, or else we will be dragged down in the race to the bottom.

As a major example of the pitfalls of chasing low-cost solutions, the article points to none other than Boeing, long a staple of the Northwest economy. Boeing recently opened a plant in South Carolina, where worker compensation, wages, benefits, and union costs are all much lower than in Washington. Yet if South Carolina is playing the low-cost card, why does it have the nation’s fifth-highest unemployment rate? Talton asserts that the state is falling victim to its own game: other countries and regions are undercutting even South Carolina’s cheap labor, so if everyone races to the bottom, many more people will suffer.

I agree with this sentiment. Here in the Puget Sound region we have not only Boeing but also Microsoft, Nintendo, Amazon, Starbucks, Bungie, and numerous other high-tech and consumer companies. Seattle is the nation’s most educated major city, with 47% of residents age 25 and over holding a bachelor’s degree or higher. We must use these strengths to our advantage. If we choose the low-cost path instead, standards of living and the flow of money will invariably suffer, as they have in South Carolina. If we instead use these people and companies to attract investment, we can continue to enjoy our status as a major hub of high-tech innovation. In a time of disruptive change, we can hold on to a higher level of quality and stability if we choose not to send our jobs overseas but to develop more valuable, high-tech jobs here.

Dev Patnaik’s article, “Forget Design Thinking and Try Hybrid Thinking,” was a refreshing take on thinking in the worlds of business and design, misleading though the title may be. At first glance one might think the article criticizes design thinking, but quite the opposite is true. The crux of his argument is that contrast produces wonderful results: someone experienced in the design field isn’t necessarily better positioned for innovation than someone with a business background. Of course, this depends on the individual and the flexibility of their thinking, but, with some design thinking added into the mix, even accountants can be creative designers.

The main example Dev gives of this “hybridity” is Claudia Kotchka, who was hired in 2000 at Procter & Gamble as VP for design strategy and innovation. The company was struggling with the digital and media transition taking place, so it needed someone to turn things around, and that is exactly what Claudia did. Though she had an accounting background, with the right design thinking she doubled the company’s revenue over the course of the next eight years. She did this by placing designers in the company’s business units, educating businesspeople about design’s strategic impact, and forming a board of external design experts.

All this goes to prove Dev’s main point about hybrid thinking. Had Procter & Gamble continued down its tried-and-true product design process, with the same business practices, it would have found itself in dire straits instead of in a successful, innovative position. Claudia’s thinking like a designer saved the company, but that in and of itself wasn’t enough; it was that thinking combined with her seemingly incongruous background which proved beneficial.

Like Dev, I believe that hybrid thinking is a potent tool for innovation. It is precisely this confluence of disciplines that creates new things. When a businessperson is given the task of design innovation, they must immerse themselves in the new school of thought; otherwise, stagnation results. Once they understand the new discipline and begin to apply both types of thought to the problem, creativity abounds. They are free to attack the problem from multiple angles.

So, while design thinking is ultimately the key to success in the future of business and corporate America, it alone will not change things. People like Claudia, who are inexperienced designers but flexible thinkers, are just as valuable as experienced designers, with whom they can interact and formulate new ideas. Hopefully, enough businesses will realize this and help steer things in fresh directions.

Leonard Herman’s book, Phoenix: The Fall and Rise of Videogames, is an exceedingly detailed compendium documenting the history of the video game industry from its infancy through the year 2000. It leaves no stone unturned: seemingly every company that had anything to do with video games and their development is mentioned, from Atari to Nintendo to Sony. If a company made a briefly-seen peripheral for the Mattel Intellivision, it is mentioned. If IBM joined forces with Atari in 1993 to manufacture Atari’s last console, the Jaguar, it’s mentioned. There are so many dates and product names floating around that sometimes it’s hard to keep track of it all, even for someone experienced in video games.

Herman starts with the absolute beginnings of everything related to video game development. From the abacus he works his way up through the first computers, detailing everything from vacuum tubes to transistor radios. As computers got smaller and more efficient, they led to the first video game, developed by Ralph Baer. One of the first primitive games, Spacewar, was noticed by a young college student named Nolan Bushnell, who founded Atari in 1972 and helped launch the video game industry. At first Atari took over the arcades, but later it moved into the home video game market with its 2600 console. For the first ten years of the industry’s existence, Atari dominated all things gaming-related.

Then, in 1983 and ’84, due to a glut of cheap, terrible software overloading the fledgling industry, video gaming and Atari collapsed. For two years sales slumped before Nintendo and its NES revitalized things, and Nintendo would go on to dominate the industry for the next ten years. During the early to mid-nineties Nintendo and Sega battled it out with their 16-bit SNES and Genesis systems, respectively. Then, in 1995, Sony launched its CD-based PlayStation console and effectively took Nintendo’s crown as king of the video game market. Sega, with its Saturn, struggled in third place from then on. The book ends with discussion of the PlayStation 2 launch, Sega’s Dreamcast launch, and plans for Nintendo’s GameCube and Microsoft’s Xbox.

For the most part I was engrossed in reading Phoenix. I genuinely feel that, after gaining such detailed knowledge of my industry’s background, I have a deeper understanding of and appreciation for the field and the companies that contributed to it. As I read, Herman’s writing style helped me absorb the information. The book reads very much like a history textbook, with almost every single sentence packed with facts and events, but Herman wrote it in a balanced, flowing manner. In each chapter, which represents a year in the world of video gaming, he adds just enough commentary to guide the reader along and help tie together the information just presented. Pictures of each relevant console and peripheral are included as well, helping to visualize some of the more obscure products.

Yet at the same time Phoenix is not without faults. Firstly, the sheer density of the material and the sometimes overly detailed histories of a product or company may lose a lot of potential readers. Some of the depth Herman goes into is excruciating, covering every lawsuit the major players were involved in, every random controller that came out for the Atari 2600, and even the game parks that Sega built. The book is also a bit too focused on the companies and consoles, mentioning individual games only when they helped or hurt a company in a big way. Examples include how the Atari 2600 E.T. game was devastatingly bad for Atari, and how Mario 64 helped establish the Nintendo 64 as a powerful 3D system. Many other major games are merely mentioned in passing, if at all; in fact, for such a breakthrough game, Final Fantasy VII gets only one sentence. The book is also very American-focused, giving little detail on Japanese companies (with the obvious exception of the major console manufacturers). Big companies like Namco, Konami, Capcom, and Squaresoft are barely there.

So, in the end, Phoenix is easy to recommend to anyone who wants to know about the video game industry. It has every detail about the major companies involved and the major consoles that shaped the industry of today; if one is prepared to absorb the onslaught of information, look no further for the Video Game Bible. It is well-written, engaging, and so knowledge-packed that even the harshest of video-game critics will be wowed by its contents.

As I love Apple, I figured I’d do one of their products for my heuristic evaluation. I didn’t want to do the iPhone or iPod, since those have been analyzed to death, so I instead decided to focus on Mac OS X. It is one of Apple’s most often overlooked products, yet it is just as important as the aforementioned iThings. I am both a PC user (cost and compatibility) and a Mac user, but in the end I’ll always prefer Mac OS X. Here is its heuristic evaluation:

1) Visibility of System Status. When using Mac OS X, the system status is always readily apparent. Open applications have a blue dot below them to stand out, the active window is dark while the others are faded, and the always-visible Menu Bar at the top changes to reflect which program you are in. On a MacBook, battery life is displayed in the upper-right. On both MacBooks and desktop Macs, other indicators are also displayed in the upper-right, such as the time, wireless AirPort status, and Bluetooth status. If the Mac ever gets overloaded and freezes up, the user definitely knows, because the dreaded Spinning Ball of Doom replaces the mouse cursor. Knowing what is going on with your Mac is never a problem.

2) Match Between the System and the Real World. Since this is an operating system, it doesn’t physically resemble anything in the real world, but elements within it do correlate to real-world equivalents. One example is the ubiquitous office naming scheme: files and the desktop, along with some Mac-unique concepts like Stacks (multiple-file organization on the Dock) and Spaces (multiple desktop workspaces). There is also the Dock itself, which houses the most-used applications and resembles a boat dock. Icons in Mac OS X also look very much like their real-life equivalents, right down to the Mac hard drive icon.

3) User Control and Freedom. The user is free to do as he pleases and has complete control over shortcuts in Mac OS X. He can open any number of applications and windows, customize keyboard shortcuts for features like Exposé, and adjust just about anything in System Preferences. If he wants to close a window but keep the application running for easy future use, he can simply close the window and Mac OS X keeps the application running until he actually “quits” out of it. The desktop and screen savers can be customized heavily as well, including not only the desktop image itself but also the organization of files on it (this may seem like a no-brainer, but it is important). Hard drive contents are easily searchable with Spotlight. Workspaces can be switched with the Spaces feature. And let’s not forget Time Machine, an automatic backup system built into 10.5 and later. With it, not only can the user back up when he wants to, but he can “time travel” to recover a file from any point in the past, such as when he changed and saved a file and then decides a week later that he wants the earlier version back.
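(A quick aside that goes beyond the original write-up: much of this customization is also scriptable, because System Preferences ultimately edits ordinary preference files that the built-in defaults command-line tool can read and write as well. Below is a minimal Python sketch of that idea, assuming it runs on a Mac; the Dock “autohide” key is just an illustrative example.)

```python
# Minimal sketch (assumes a Mac): read and tweak a Dock preference through the
# built-in `defaults` command-line tool, the same settings System Preferences
# manages. The "autohide" key is only an illustrative example.
import subprocess

def read_pref(domain, key):
    """Return the current value of a preference key, or None if it is unset."""
    out = subprocess.run(["defaults", "read", domain, key],
                         capture_output=True, text=True)
    return out.stdout.strip() if out.returncode == 0 else None

def set_bool_pref(domain, key, value):
    """Write a boolean preference key."""
    subprocess.run(["defaults", "write", domain, key, "-bool",
                    "true" if value else "false"], check=True)

if __name__ == "__main__":
    print("Dock autohide is currently:", read_pref("com.apple.dock", "autohide"))
    set_bool_pref("com.apple.dock", "autohide", True)
    subprocess.run(["killall", "Dock"], check=False)  # restart the Dock so the change takes effect
```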

4) Consistency and Standards. Starting with Mac OS X 10.5 Leopard, all windows and Apple-built applications in the OS have the same uniform look and feel. The Menu Bar at the top is always there, changing only its contents to reflect each program. Icons for all programs have the same look and feel, no matter who developed them. In all the Apple-supplied programs the graphics are similar, as with iTunes’ Cover Flow and iPhoto’s photo-viewing options. Everything has the same polish and clean design.

5) Error Prevention. This is HUGE on Mac OS X, and one of the reasons why I like it so much. It has error prevention built into its core. There are virtually no viruses, despite the rising number of Mac users, and nowhere near as much spyware can glom onto your system as on Windows. Defragmentation is basically unneeded except in the direst of cases, since the OS handles it in the background for you. And the OS is stable. Programs may crash sometimes, and the computer may get overloaded like any computer, but the vast majority of the time Mac OS X will not crash. There are no Blue Screens of Death. If a program is acting unruly, the user can right-click the program’s icon in the Dock and Force Quit it, no Ctrl+Alt+Delete needed. Simple as that.

6) Recognition Rather than Recall. This is something Apple had in mind from the very beginning when designing the Mac OS. Unlike Windows, which was originally developed by and for engineers, Mac OS was designed to be easy for the average person to use. Once one learns the basics of navigating files and programs, it is stupidly easy to do again. And yet, for someone used to Windows, recall may be much more strained when making the switch. The Menu Bar is at the top instead of the bottom, system options have to be accessed through the Apple icon, the close button for windows is in the upper-left instead of the upper-right, and the OS is application-based, not window-based like Windows. This last point can be particularly disorienting for someone used to a program closing when its window is closed, for it is not so on Mac OS X. But none of these issues is a big deal after a little getting used to.

7) Flexibility and Efficiency of Use. Here we have two conflicting points. While the OS itself is flexible (see the customization examples above), its development is not. Most people, myself included, do not see this as a problem since we’re not programmers, but to the development community Apple’s tightly controlled, closed-source platform is stifling (Windows is also closed-source, as an aside). No one but Apple can change it, so in that sense it is very inflexible; I suppose that’s why Linux is around. But Mac OS X is very efficient, especially 10.6 Snow Leopard, which has a tiny footprint on the hard drive and is easy on resources. Using the OS is also efficient, since there is no bloatware installed and there is no slow-down (with the obvious exception of running highly intensive programs). Searching is a breeze with Spotlight.
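(Another aside that is not in the original post: the same Spotlight index behind that search box can also be queried from Terminal through the mdfind tool that ships with Mac OS X. Here is a minimal Python sketch of using it, assuming it runs on a Mac; the search phrase and the home-folder restriction are just placeholder examples.)

```python
# Minimal sketch (assumes a Mac): query the Spotlight index through the
# bundled `mdfind` command-line tool. The search phrase is only a placeholder.
import subprocess
from pathlib import Path

def spotlight_search(query, only_in=None):
    """Return file paths whose Spotlight metadata matches the query."""
    cmd = ["mdfind"]
    if only_in:
        cmd += ["-onlyin", str(only_in)]  # restrict the search to one folder
    cmd.append(query)
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return [line for line in result.stdout.splitlines() if line]

if __name__ == "__main__":
    # Example: the first ten matches for "heuristic evaluation" under the home folder.
    for path in spotlight_search("heuristic evaluation", Path.home())[:10]:
        print(path)
```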

8) Aesthetic and Minimalist Design. This is Apple’s mantra: avoid excess. Everything in the OS is highly polished, clean, and aesthetically pleasing. Soft gradients are used extensively in windows and buttons, and, unless the user FUBARs the desktop or Dock, everything there is also neat and tidy. It’s so well done that the latest Windows releases have been imitating it in terms of shiny design and smoothly flowing animations.

9) Help Users Recognize, Diagnose, and Recover From Errors. In the event a problem occurs, Mac OS X is very helpful in providing the user with information. If a program crashes, a dialog box pops up and tells the user their options (send an error report, restart the program, etc.). If your Mac is frozen, you’re definitely told so by the lovely little spinning beach ball. And if there are connectivity problems or the like, the OS will guide the user through what to do, such as when the computer can’t connect to the internet and the Network Manager automatically kicks in.

10) Help and Documentation. Every Mac comes with both documentation and the OS X backup discs, so if something REALLY messed up happens, all is never lost. There is also the Help option in the Menu Bar at all times, no matter what program you are in. And if you really can’t figure something out, Apple’s website is chock full of help documents and forums for solving the problem, not to mention the Apple Stores and their Geniuses. OS X is well supported.

So, after evaluating Mac OS X against the ten heuristics, I’d have to say it scores pretty highly. It is aesthetically pleasing, easy and efficient to use, powerful, stable, and well supported; once one gets used to Mac OS X, it is hard to think any other system is better. It’s not perfect, but I’d say the reason most people don’t like it is simply unfamiliarity, or a general dislike of Apple that clouds their judgment. Can’t really fault the OS for that, though.

Mac OS X Snow Leopard screenshot

Sharing a Design of Mine

October 16, 2009

This is a picture of a 3D torii gate I made as part of a level for an Intro to Level Design class. It is a blended shot: the final look on the left, fading into the black-and-white geometry shot on the right.

Brian Rejack’s article about video games’ relationship with historical accuracy was an informative read, discussing problems I had also thought about before. For the article, Mr. Rejack used the game Brothers in Arms: Road to Hill 30 as a case study. His basic premise is that, while video games may use convincing narrative and visually accurate virtual worlds to simulate the reenactment of historical events, they ultimately fall short of real reenactments because of the detachment involved in playing them. This is something with which I can both identify and disagree.

The article begins with a look at an advertisement for Dogfights, a then-new History Channel WWII documentary series from 2006. Rejack examines how the documentary’s makers use virtual graphics to emphasize its excitement and accessibility to viewers, which leads him to Brothers in Arms, a 2005 video game set in World War II during and after the D-Day invasion. He goes on to discuss the game’s narrative, told from a soldier’s point of view, and its virtual worlds, all modeled very accurately; however, he then asserts that these visual and storytelling techniques can only carry historical authenticity so far because “The player’s sole opportunity for interacting with the other characters occurs during a firefight, something that keeps the emotional register permanently ratcheted to fear” (Rejack 2007). This is very true. In order to experience the emotional engagement and understanding a reenactment brings, the participant must become involved. During a typical reenactment the actors must coordinate with each other while experiencing a physical connection with the place, things a video game can never convey. The playing experience is usually solitary and focused on gameplay, with that connection experienced only in brief pre-designed cutscenes. Is the game fun, or is it accurate? This tension is inherent in the design of historical games, and it takes away from the game’s credibility as historically authentic. Even with unlocked extras that shed more light on the game’s historical side, the player’s historical understanding is still limited, because they are focused on shooting enemies instead of connecting with the event.

After discussing these limitations, Rejack points to a game that presents interaction in a different way: Façade. In it, the player moves through the game by choosing different interactions with two virtual characters. Rejack says that if Brothers in Arms combined its visual detail with Façade’s method of story progression, it could engage the player much more effectively. Such an approach would bring the player to a more personal level by having them talk and interact with virtual people at the virtual historical scene, and the responses they receive could help them understand the event more powerfully. As someone who wants to go into the gaming field, I totally agree with this point. I have looked at other games, such as Mass Effect (2007), and found that this level of control over dialogue, and therefore events, does indeed heighten the player’s experience and connection to the virtual world and its characters. Narrative presentation is constantly evolving in the video game field, and Rejack makes a good point about how static, cutscene-based storytelling can detract from the overall experience.

And yet, while I agree with the article’s points about the gap between historical understanding and the game’s structure, I disagree that the game is as limited a tool for understanding as he says it is. In my view, it is still far better for a person to see and hear a virtual representation of an event than to simply read or hear about it. If they can be engaged as fully as one is when playing a game, all the better. More character and situational development simply needs to be thrown into the action mix in order for it to succeed.

Source:

Rejack, B. (2007). Toward a virtual reenactment of history: Video games and the recreation of the past. Rethinking History, 11(3), 411-425. doi:10.1080/13642520701353652

I believe the article is credible for the following reasons: Rejack includes an extensive bibliography showing where he drew his ideas from, both third-party and firsthand sources; the article was published in a peer-reviewed journal; the information is presented objectively and free of obvious bias; and the article is still current, having been published in 2007.

John Maeda TED Video Response

September 27, 2009

I found John Maeda’s lecture on design and computers to be interesting. His viewpoint, coming from a seemingly un-design-related field, was refreshing and at the same time different from my own.

Maeda’s take on computers and design was valuable to see, since he started with the purely technological side back before any sort of artistic concepts had arrived. I liked seeing the videos of his experiments, showing how he was a pioneer in transforming the technical world of computers and programs into something which could be used for art, such as with his Adobe Illustrator-like prototype program. When he described how his left-brain professors found his program useless and illogical, that showed me that he was truly determined to change things.

And yet, when he said he used technology but didn’t like it, I couldn’t help but disagree in some ways. I do agree that the more we use technology, the more disconnected we get from the human side of ourselves, but I also try to find more of a balance and enjoy technology for what it is and what it can do for us. When I am working in my 3D programs such as Maya or ZBrush, I marvel at the feat of technical engineering going on in the background and genuinely appreciate these technological outlets for my creativity. They become a part of me, integrated into the balance of technology and design which I, like Mr. Maeda, always strive to perfect.