John Berger’s “Ways of Seeing” was an eye-opener, no pun intended.  Despite having read only the first two chapters, I can already sense that this man knows what he is talking about.  He mainly focuses on art and images, describing the various perspectives and ways of seeing we experience.  My particular focus in this post is perspective itself, in the artistic-technique sense.

Berger talks about how, for most of history, paintings and man-made images had a particular viewpoint into which the viewer is thrust by the image creator.  When perspective is used to present the image, the creator is, without the viewer realizing it, making the viewer see things in a constructed manner.  The viewer, looking at things from the perspective desired by the creator, then understands the image and all its connotations the way the creator intended.  For example, if someone looks at a worm’s-eye perspective photograph of two men arguing, the photographer has made that person look at the subjects in a psychologically charged way.  The men may appear to tower over the viewer, and, ipso facto, appear more powerful and convey more energy than they would in a simple medium shot.  That is the intended effect.

Yet not all images are intended for any particular person.  Take this image, for example.  [Image: space nebula]  It depicts a vast, mysterious expanse of space, with a swirling nebula in the center.  But there is no camera effect, no sense that a particular person is looking at it, no perspective.  This contrasts with this astronaut image, which obviously channels the viewer’s eye in a certain way.  [Image: astronaut perspective shot]  Opposite from the nebula picture, this one places the viewer firmly right in front of the astronaut.  The idea that it is the viewer, and only the viewer, who is seeing this pervades the image.

As we can see, the singular viewpoint exhibited by the astronaut image is the most common type, found in most photographs and most paintings.  But the nebula-type image, with its ambiguous perspective, is becoming more common.  One photographer, Katie Paterson, has done some work illustrating this concept of moving beyond a single viewer.  She has taken images and sound clips of a glacier area, one example of which is to the right.  [Image: glacier]  (For more, see http://www.katiepaterson.org/vatnajokull/index.html)

When viewing Katie’s work and listening to the sound clips, it becomes apparent that she understands the idea of an experience with broader appeal, perspective-wise.  The stark, serene glacier images she presents, combined with the tranquil sound clips, make the viewer/listener feel as if they are there, quietly observing nature.  From this, we can gather that a multimedia presentation of something can add much more dimension to it, detracting from the single-viewer idea even further.  One does not feel that she crafted her compositions for him, but rather for no one in particular.

In the introduction to Alan Gilchrist’s “Seeing Black and White,” the author establishes that his book will take an objective look at how we see the lights and darks of the world around us.  The foundational concepts upon which Gilchrist bases his theories lie primarily with light, the objects themselves, and, finally, our eyes.  Gilchrist asserts that what we see is based on established, ordered reality that is uniform for all who see it, and certain properties of objects and light hold true regardless of the perceptions of the observer.  This differs from behavioral theorists, who assert that what we see is entirely based on our eyes and mental perceptions of reality.   The book hits the ground running, defining some important concepts that help the reader understand what it is the author is proposing.

Gilchrist starts by looking at the scientific properties of objects and the light they reflect.  Some essential terms include distal stimulus, proximal stimulus, and percept.  Distal stimulus refers to the object in the environment, completely disregarding the observer.  It could be a chair, or a fence, or a tree, but it is just that object, before the observer comes along and projects their observational standpoint onto it.  These objects are illuminated by a certain amount of light, called illuminance, and reflect a certain amount of light, called reflectance.  These are quantifiable properties — illuminance is an absolute value of light, expressed in measurements such as candelas per square meter, and reflectance is a percentage based on how much of the illuminance an object reflects.  A black object, for instance, reflects 3% of the light hitting it, while a white object reflects 90%.

Proximal stimulus, on the other hand, refers to the point when the reflected light from the object makes contact with the retinal cells in our eyes.  The intensity of light measured at each point on the retina, called luminance, combines the illuminance and reflectance properties of the distal stimulus.  Now, once the light has made contact with us, we move to how our mind decodes this information.  This is the percept, and it is where things become subjective.  The example Gilchrist uses is that of the moon:  while the light it reflects is close to that of a black object, we perceive it as white.
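
To keep these terms straight, here is a minimal sketch in Python of how they fit together, under the simplifying assumption that luminance scales with illuminance times reflectance (the numbers are illustrative picks of my own, not from the book):

```python
# A toy model of Gilchrist's terms: the distal stimulus has an illuminance
# (light cast on it) and a reflectance (fraction of that light it reflects);
# what reaches the retina is the luminance.
# Simplifying assumption: luminance ~ illuminance * reflectance,
# ignoring geometry and real-world units.

def luminance(illuminance: float, reflectance: float) -> float:
    """Intensity of light reaching the eye from a surface."""
    return illuminance * reflectance

BLACK, WHITE = 0.03, 0.90   # typical reflectances from the reading

daylight = 10_000           # illustrative illuminance value, not a measurement

print(luminance(daylight, BLACK))   # 300.0  -> low luminance
print(luminance(daylight, WHITE))   # 9000.0 -> high luminance
```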

One more concept Gilchrist puts forth is the difference between two terms: lightness and brightness.  For many people, these are synonymous; however, the author distinguishes between the two.  He says that lightness is the perceived reflectance of an object.  If we perceive an object as black, with low reflectance, it has low lightness.  Brightness, on the other hand, is perceived luminance, or how intense the light appears to be when it hits our eye.  As you may recall, luminance is a combination of how much light an object actually reflects and how much light is cast on it.  This distinction is subtle, but necessary for understanding the rest of the book.

Gilchrist also goes a bit into the term contrast.  First he defines it as another word for luminance ratio, that is, the perceived difference in luminance between two objects or surfaces.  However, he also says contrast can be an illusion – for instance, a black surface in light may appear to be a different tone of gray than a white surface in shadow.  Yet, when the differences in reflectance and illuminance are taken into account, the two are actually the same tone.  It is only because of the contrast of the surrounding black or white squares, combined with the shadow cast on the white square, that the two appear different.  For an example of this, see this link:  http://upload.wikimedia.org/wikipedia/commons/thumb/6/60/Grey_square_optical_illusion.PNG/360px-Grey_square_optical_illusion.PNG
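
Running some hypothetical numbers through the same simplified relation as before shows how this illusion can work:

```python
# The illusion by the numbers: a black surface in bright light can send
# the eye the same luminance as a white surface in shadow.
# The illuminance values are hypothetical, chosen to make the math land.

def luminance(illuminance: float, reflectance: float) -> float:
    return illuminance * reflectance   # same simplification as before

BLACK, WHITE = 0.03, 0.90        # reflectances from the reading
in_light, in_shadow = 3000, 100  # hypothetical illumination levels

black_in_light = luminance(in_light, BLACK)    # 3000 * 0.03 = 90.0
white_in_shadow = luminance(in_shadow, WHITE)  # 100 * 0.90  = 90.0

# Identical luminances reach the eye, yet the percept differs: we
# discount the shadow and still call one square white, the other black.
assert black_in_light == white_in_shadow
```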

A little while back in this class we defined tone as the neutrality of a color, or how gray it is.  After considering what Mr. Gilchrist had to say about contrast, I now think that tone is not a solid attribute of an object, but rather a perception based on other nearby objects.  Though an object may appear to have a lighter tone of red than another, that may just be caused by the “higher of two luminances at an edge” (Gilchrist, 2006); thus, even if two red vases of the same tone were next to each other and one were in shadow, the contrast could affect how we see their tones.  Tone is more subjective than I previously thought.

After having read the introduction to “Seeing Black and White,” I feel as if I have a stronger, if not completely clear, idea of how we see lights and darks in our reality.  The writing was clean, but the heady, scientific nature of the content made comprehension a bit difficult for me at times.  It was only after re-reading certain sections and coming up with my own words for the definitions that I grasped more of what Mr. Gilchrist had to say.  I like how he broke the distal stimulus, or object, the proximal stimulus, or contact with our eye, and the percept down into separate ideas, rather than one all-encompassing process of seeing.  If I have time, I may delve further into the book.

Architectural Design

January 11, 2010

My assigned design discipline was architecture, which, to put it in basic terms, is the art and science of designing buildings or other structures.  It can, however, also encompass a broader range of related concepts, such as designing outdoor spaces, designing the flow of a web page, or designing any highly structured system.  For the purposes of this explanation, I’ll stick to buildings.

When designing a building, an architect’s job might include many aspects of design, from the actual look of the building, to the project planning, to cost estimation, to the supervising of construction.  Yet there can be much creative freedom in being an architect, depending on the needs of the building.  Sometimes a building needs to be functional and not pretty, such as a basic apartment building.  More often than not, though, an architect chooses the visual style, materials, and all the ingredients he needs to create his structure.  The ultimate goal is to have a building that is pleasing for those who will use it or see it, so many elements of design are utilized to execute this.

One important element of design is texture, the tactile or visual “feel” of a surface.  [Image: a stone cottage]  This can help a great deal when an architect wishes to elicit a particular reaction, such as with this stone cottage.  The rough, jagged stone arrangement, combined with the timber wood, conveys a sense of earthiness and a connection with nature, which is fitting for a forest cottage.  Concrete would have seemed out of place here, to say the least.  [Image: grass-covered building]  Texture is also used to great effect with this “green” building on the left.  While the primary purpose was to create an eco-friendly building, I would also argue that the architect specifically used the terraced grass effect to make the building seem more calm and inviting amidst a sea of steel and glass.  The fact that one entire, sloped side is covered in grass also reminds the viewer of a hill, and stirs up feelings of grass running between one’s fingers.  At least, it does for me.

Form is another element of design which can be used to great effect by architects.  While most buildings are indeed square, it is perhaps this pervasive box design that can cause an unusually-shaped building to stand out all the more.  My first example is the Sydney Opera House, on the right, the shape of which evokes majestic sails in the Sydney Harbor.  The progenitors of the Opera House project could have decided on a conventional design, but instead they wanted something different.  The result has come to symbolize Sydney, and, indeed, Australia.  Yet, as I said earlier, architecture is also about function, so those sail-shaped halls were also designed to provide a unique acoustic experience.

Secondly, I point to our own local Space Needle as an instance of an impressive use of form.  Built for the 1962 World’s Fair, the Needle contains Googie architectural elements, from the skinny main tower up to the observation deck/restaurant level, that create such a unique look that the Needle continues to define images of Seattle the world over.  Even with the World’s Fair come and gone, the Space Needle’s unmistakable profile remains as a major tourist attraction.

Value is also an element of design that architects utilize.  More often than not, this is done through choice of material, the different values of which can reveal character in a building.  Take the Empire State Building, for instance.  Seen up close, the different values of limestone, combined with the crisp, Art Deco lines of the style, make for a truly elegant building.

This church is also a great example of value.  Since the architect chose brick as his medium of construction, we can see different values of brick dotted throughout the facade.  Not only can we gauge the building’s relative age from that variation, but it also lends a certain character to this old building.  Look carefully at the sheer variety of values found here.  It really gives the building an old, dusty look.

For a local architecture company, I chose MulvannyG2 Architecture, a well-known firm whose projects include Redmond City Hall and the Tacoma Convention and Trade Center.  Website:  http://www.mulvannyg2.com.  Phone No.:  425-463-2000.

Hokay, so.

2 a)

i.  Sender – the originator of the message

ii.  Receiver – the end recipient(s) of the message

iii.  Encoding – preparing the message in such a way that it can be transported

iv.  Decoding – receipt and preparation of the message for the Receiver

v.  Channel – pathway through which the Encoded message is sent to the Receiver

vi.  Noise – outside influences which can interfere with and potentially change the message
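
Taken together, these six pieces form a pipeline from Sender to Receiver.  Here is a minimal sketch of that pipeline in Python; all the names, the sample message, and the noise behavior are hypothetical, just to show how the parts connect:

```python
# A toy pass through the six components: a Sender's message is encoded,
# travels a channel where noise may alter it, is decoded, and reaches
# the Receiver. Everything here is my own illustration, not a formal model.
import random

def encode(message: str) -> bytes:
    # Encoding: prepare the message so it can be transported
    return message.encode("utf-8")

def channel(data: bytes, noise_level: float = 0.1) -> bytes:
    # Noise: an outside influence that may change the message in transit
    if random.random() < noise_level:
        data = data.replace(b"o", b"0")  # stand-in for corruption
    return data

def decode(data: bytes) -> str:
    # Decoding: receive and prepare the message for the Receiver
    return data.decode("utf-8")

sender_message = "Watch the new training video"     # the Sender originates
received = decode(channel(encode(sender_message)))  # the Receiver gets this
print(received)
```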

2 b)

Sender – Corporate Dude in charge of training employees

Encoder – Message is conveyed by creating a training video

Channel – Video is distributed over the internet to all stores

Decoder – Video playback program has to decompress and format the video for consumption

Receiver – Employee who watches the training

Noise – Manager, fellow employees, or the employee himself may have negative feelings about the product the training video covers, thus coloring the employee’s perception of the training on said product

2 c)

Let’s say there is a new section of a game level that needs to be fleshed out on a particular day.  The Art Director, the Sender, has a meeting with the art leads, who are the Encoders of his message.  They discuss what the Art Director wants, then the art leads go out and give the information to the level and environment teams.  The Channel is the subsequent email in which the art lead tells his team what needs to be done.  The artists’ computers Decode the message, and then the artists, the Receivers, take in and process the info.  Noise could occur if the art leads don’t convey the message exactly as the Art Director wants it, or if other programs the artist is running distract him from the message.  Then, if the artist has questions about what he has to do, he can either ask the art lead or go back and ask the Art Director himself for clarification.

2 d)

The Westley/MacLean model fits 2c’s example best.

When I read Jon Talton’s article, “Washington state has to play the add-value card, not low-cost-leader ace,” the title alone got me thinking.  With the onslaught of globalization both now and in the coming years, we will have to take an intelligent attitude in order to remain a relevant player in the high-tech world.  As the article says, opinions are basically divided between two camps:  appeal for money with low costs, or appeal for money with valuable, high-quality commodities.  I agree with Talton:  we need to hold ourselves to a higher standard and use our highly-educated workforce to our advantage, or else we will be dragged down in the race to the bottom.

As a major example of the detriments of companies seeking low-cost solutions, the article points to none other than Boeing, long a staple of the Northwest economy.  Boeing recently opened a plant in South Carolina, where worker compensation, wages, benefits, and union costs are all much lower than in Washington.  Yet if South Carolina is playing the low-cost card, why does it have the nation’s fifth-highest unemployment rate?  Talton asserts that the state is falling victim to its own game – other countries and locations are undercutting even South Carolina’s cheap labor, so if everyone races to the bottom, many more people will suffer.

I agree with this sentiment.  Here in the Puget Sound region we have not only Boeing, but Microsoft, Nintendo, Amazon, Starbucks, Bungie, and numerous other high-tech/commodity companies.  Seattle is the nation’s most educated major city, with 47% of all residents age 25 and over in possession of a bachelor’s degree or higher.  We must use these strengths to our advantage.  If we choose the low-cost path instead, the standards of living and money flow will invariably be negatively impacted, as with South Carolina.  If we instead use these people and companies to lure money, we can continue to enjoy our status as a major hub of high-tech innovation.  In a time of disruptive changes, we can hang on to a higher level of quality and stability if we choose not to send our jobs overseas but to develop more valuable, high-tech jobs here.

1st Amendment Interviews

November 21, 2009

My interviewees for this 1st Amendment blog assignment were as follows:

1)  Mom’s friend, late 50’s, female.  She agrees with the freedoms in the amendment because it doesn’t place limits, and she agrees with the theory that the government shouldn’t place restraints on people.  She supported all of them, except excessive freedom of press because it could be abused for panic, false reporting, etc.  She did not recognize the amendment but had heard of it, and was familiar with the concepts.

2)  Co-worker, age 25, male.  He agreed with the freedom for people to do what they want.  He was all for people doing what they feel like, as long as it doesn’t harm others.  He thought there should be no limits as long as it doesn’t negatively impact others or their beliefs.  He knew it was a law (of sorts), but didn’t know which one it was.

3)  Co-worker, age 18, female.  She agrees with the amendment, and said it was a good set of morals, that everybody should respect everybody equally as it said.  She thought too much freedom in exercising religion was bad, i.e. crazy people establishing religions that could harm others.  She immediately recognized the amendment.

4)  Friend’s grandmother, 80’s.  She agreed with and supported most of the rights, but thought that it sounded like you can’t stop someone if they have a harmful, cult-like religion.  She did not recognize the amendment, but was familiar with its concepts.

5)  My mom, early 50’s.  She thought we had the rights, but that Congress should re-establish them.  She said there was no such thing as too much freedom, and she immediately recognized the amendment.

6)  Friend of my grandfather, late 70’s, female.  She agreed with all the freedoms, except excessive freedom of press when it did immoral things, like harass people or violate their rights.  She did not recognize the amendment.

I was actually surprised at how no one blindly supported the rights listed in the 1st Amendment.  Everyone I interviewed said they supported the rights, but that if any of them harmed others they shouldn’t be allowed to be excessive.  Press and religion were the two which were most frequently mentioned as being too excessive, which I found interesting because most Americans go around madly trumpeting how great those freedoms are.  I was also surprised at how four out of six people did not recognize the amendment, though they all found it familiar and had heard of the freedoms.

Patterns across age groups were subtle, since everyone had the same basic views, but they existed.  The elderly age group, while supportive, was quick to point out a particular freedom that could be abused to harm others.  The middle age group, on the other hand, thought there was no such thing as giving people too much freedom.  Lastly, my generation was also pretty liberal, but more of the mindset of everyone treating each other with equal respect.

Ultimately, after interviewing these six people about their views on the first amendment rights, I learned that most people are optimistic when it comes to freedoms, but are mindful if abuse of the freedoms can harm others.  It is interesting that people think this way, but it can also cause problems concerning the rights contained in the amendment.  When millions of people all interpret the Constitution this way and think there should be certain moral limits, we get into the gray area of what is and isn’t moral, so certain decisions have to be made by courts on how to interpret the rights.  Naturally, not everyone will agree with those decisions.  As another note, since most people were vaguely aware of their 1st Amendment rights but didn’t know they came from the 1st Amendment, I can see how this unawareness could contribute to people’s rights unknowingly being stripped away from them, a little bit at a time.  Also, if people don’t know they have these rights, they may also trade them away for a bit of perceived security, such as with the “War on Terror.”  Or they could give them away anyway.  Shame, that.

Dev Patnaik’s article, “Forget Design Thinking and Try Hybrid Thinking,” was a refreshing take on thinking in the worlds of business and design, misleading though the title may be.  At first glance one may think the article criticizes design thinking, but quite the opposite is true.  The crux of his argument is that contrast produces wonderful results – someone experienced in the design field isn’t necessarily more advantageous for innovation than someone with a business background.  Of course, this depends on the individual and the flexibility of their thinking, but, with some design thinking added into the mix, even accountants can be creative designers.

The main example Dev gives of this “hybridity” is Claudia Kotchka, who was hired in 2000 at Procter & Gamble as VP for design strategy and innovation.  The company was struggling with the digital and media transition taking place, so they needed someone to turn things around, and that is exactly what Claudia did.  Though she had an accounting background, with the right design thinking she doubled the company’s revenue over the course of the next eight years.  She did this by placing designers in the company’s business units, educating businesspeople about design’s strategic impact, and forming a board of external design experts.

All this goes to prove Dev’s main point about hybrid thinking.  Had Procter & Gamble continued down their tried-and-true product design process, with the same business practices, they would have found themselves in dire straits instead of in a successful, innovative position.  Claudia’s thinking like a designer saved them, but that in and of itself wasn’t enough – it was that, combined with her seemingly incongruous background, which proved beneficial.

Like Dev, I also believe that hybrid thinking is quite a potent tool for innovation.  It is precisely this confluence of various disciplines that creates new things.  When a businessperson is given the task of design innovation, they must change themselves and immerse themselves in the new school of thought.  Otherwise, stagnation results.  Once they understand this new discipline and begin to apply both types of thought to the problem, creativity abounds.  They are free to attack the problem from multiple angles.

So, while design thinking is ultimately the key for success in the future of business and corporate America, it alone will not change things.  People like Claudia, who are inexperienced designers but flexible thinkers, are just as valuable as experienced designers, with whom they can interact and formulate new ideas.  Hopefully, enough businesses will realize this and help steer things in fresh directions.

Intellectual property – something one has created and legally owns – is quite a hot-button issue, now isn’t it?  And rightly so: without legal protection for works we have made, anyone could take that work, claim it as their own, alter it, sell it, take the credit, and give us precisely nothing back for our creation.  Which would be bad.  This is particularly important for companies and individuals who make their income off of their creations, so, naturally, there are laws in place to protect said people.  But what constitutes intellectual property, and how far can someone go with someone else’s IP before breaking the law?  This was one major question involved in the Metallica vs. Napster case in 2000, a landmark case which helped set a precedent for similar file-sharing cases in the years to come.

The whole shindig began in 2000, when thrash-metal mega giant Metallica discovered a demo of one of their new songs, “I Disappear,” playing on the radio before its release date.  This, of course, came as a surprise to them.  How did it get out?  The band decided to see what was up, and they traced the leak back to a website called Napster, a peer-to-peer file-sharing network.  There, they not only discovered “I Disappear” floating around the intarwebs, but found that their entire catalog was also freely available.  What would any good American do upon finding their stuff being misused?  Sue, which is exactly what Metallica did.

The band, led by drummer Lars Ulrich, sued the pants off of Napster.  They claimed Napster was guilty of copyright infringement, unlawful use of a digital audio interface device, and violations of the Racketeer Influenced and Corrupt Organizations (RICO) Act.  They also implicated three universities in the lawsuit, which promptly banned the site on their campuses.  After the initial suit, Metallica hired a private firm to track Napster usage over a weekend, and with the results they demanded that Napster ban over 300,000 users.  Napster complied, but by this time other artists such as Dr. Dre had joined in, forcing Napster to ban another 241,000 users.  As the bands held talks with the service, Napster collapsed under the pressure and filed for Chapter 11 bankruptcy protection.  It would later be bought by Best Buy and re-emerge as a paid music download service.

Whew.  I’d say Napster got harpooned to the wall pretty handily there.  With the result in this case, Metallica paved the way for others to prosecute file-sharing networks, helping the RIAA to enforce their position against those services and create ridiculous anti-download punishments for those who disobeyed.

I personally agree with the verdict, but think that the consequences down the road were overzealous.  So, Metallica shut down a popular file-sharing network.  I agree with this because it was indeed a gross violation of Metallica’s IP rights, but, on the other hand, it did put a damper on some potential buyers of Metallica’s albums.  Some of those people were sharing simply to sample the music, and could have gone out and bought it if they liked what they heard.  People like this, who didn’t see anything wrong with file sharing, cried out against the decision as iron-fisted.  Yet, in reality, I think they were a minority and, while I do sympathize with those who had good intentions, by and large the Napster users were doing something illegal and got called out on it.  So the site went through a much-needed reorganization.  A victory for IP champions.

Yet, later, I have seen things which caused my eyeballs to almost drop out of their sockets in disbelief.  One prominent example is when the RIAA sued a single mom, Jammie Thomas-Rasset, for nearly $2 million in damages for downloading songs.  And they won the right to sue her for that amount of money – $80,000 per song!  That is just insane.  Madness.  But it’s something which may well not have come around if it hadn’t been for the Metallica/Napster precedent set nine years earlier, so not everything that came out of that case was rosy.  Come on, RIAA, I understand your feelings, but $80,000 per song?  I can only laugh inwardly at that.

But the fun doesn’t stop with the music industry.  My field, the video gaming industry, has had its fair share of issues.  One instance was when Microsoft included vibration functions in its Xbox controllers but assumed the technology was so ubiquitous that it didn’t bother to find out whether any patents were involved.  It turns out a small company did indeed own the patents for the tech, and it successfully sued Microsoft.  Pays to do some research.

Sources:  http://news.cnet.com/Metallica-fingers-335,435-Napster-users/2100-1023_3-239956.html

http://technology.timesonline.co.uk/tol/news/tech_and_web/article6534542.ece

http://austin.bizjournals.com/austin/stories/2006/02/13/focus2.html

Leonard Herman’s book, Phoenix:  The Fall and Rise of Videogames, is an exceedingly detailed compendium documenting the history of the video game industry from its infancy through the year 2000.  It leaves no stone unturned – seemingly every company which had anything to do with video games and their development is mentioned, from Atari to Nintendo to Sony.  If a company made a briefly-seen peripheral for the Mattel Intellivision, it is mentioned.  If IBM joined forces with Atari in 1993 to manufacture Atari’s last console, the Jaguar, it’s mentioned.  There are so many dates and product names floating around that it is sometimes hard to keep track of them all, even for someone experienced in video games.

Herman starts with the absolute beginnings of everything related to video game development.  From the abacus he works his way up through the first computers, detailing everything from vacuum tubes to transistor radios.  As computers got smaller and more efficient, they led to the first video game, developed by Ralph Baer.  One of the first primitive games, Spacewar, was noticed by a young college student named Nolan Bushnell, who founded Atari in 1972 and helped launch the video game industry.  At first Atari took over the arcades, but later they moved into the home video game market with their 2600 console.  For the first ten years of the industry’s existence, Atari dominated all things gaming-related.

Then, in 1983 and ’84, due to a glut of cheap, terrible software overloading the fledgling industry, video gaming and Atari collapsed.  Sales slumped for two years before Nintendo and its NES revitalized things, and Nintendo would come to dominate the industry for the next ten years.  During the early to mid-nineties, Nintendo and Sega battled it out with their 16-bit SNES and Genesis systems, respectively.  Then, in 1995, Sony launched their CD-based Playstation console and effectively took Nintendo’s crown as king of the video game market.  Sega, with their Saturn, struggled in third place from then on.  The book ends with discussion of the Playstation 2 launch, Sega’s Dreamcast launch, and plans for Nintendo’s Gamecube and Microsoft’s Xbox.

For the most part I was engrossed in reading Phoenix.  I genuinely feel that, after gaining such detailed knowledge of my industry’s background, I have a new depth of understanding and appreciation for the field and the companies who contributed to it.  As I read, Herman’s writing style helped with my absorption of the information.  The book reads very much like a history textbook, with almost every single sentence containing information and events, but Herman wrote it in a balanced, flowing manner.  Throughout each chapter, each of which represents a year in the world of video gaming, he also adds just enough commentary to guide the reader along and help coalesce some of the information just read.  Pictures of each relevant console and peripheral are included as well, helping to visualize some of the more obscure products.

Yet at the same time Phoenix is not without faults.  Firstly, the sheer density of the material and the sometimes overly-detailed histories of a product or company may lose a lot of potential readers.  Some of the depth to which Herman goes is excruciating, telling us about all the various lawsuits the major players were involved in, every random controller that came out for the Atari 2600, and even the game parks that Sega built.  The book is also a bit too focused on the companies and consoles, mentioning only the most important games, and then only if they helped or hurt a company in a big way.  Examples of this include how the Atari 2600 E.T. game was devastatingly bad for Atari, and how Mario 64 helped establish the Nintendo 64 as a powerful 3D system.  But many other major games are merely mentioned in passing, if at all.  In fact, for such a breakthrough game as it was, Final Fantasy VII gets one sentence.  Also, much of the book is very American-focused, not giving much detail at all to Japanese companies (with the obvious exception of the major console manufacturers).  Big companies like Namco, Konami, Capcom, and Squaresoft are barely there.

So, in the end, Phoenix is easy to recommend to anyone who wants to know about the video game industry.  It has every detail about the major companies and consoles which shaped the industry today; so, if one is prepared to absorb the onslaught of information, look no further for the Video Game Bible.  It is well-written, engaging, and so knowledge-packed that even the harshest of video-game critics will be wowed by its contents.

As I love Apple, I figured I’d do one of their products for my heuristic comparison.  Yet I didn’t want to do the iPhone or iPod, since those have been analyzed to death, so I instead decided to focus on Mac OSX.  This is one of their most often overlooked products, but it is just as important as the aforementioned iThings.  I am both a PC user (cost and compatibility) and a Mac user, but in the end I’ll always prefer Mac OSX.  Here is its heuristic evaluation:

1)  Visibility of System Status.  When using Mac OSX, the system status is always readily apparent.  Open applications have a blue dot below them to stand out, the active window is dark while the others are faded, and the always-visible Menu Bar at the top changes to reflect which program you are in.  If it’s a MacBook, battery life is displayed in the upper-right.  On both MacBooks and desktop Macs, other indicators are also displayed in the upper-right, such as the time, wireless Airport status, and Bluetooth status.  If the Mac ever gets overloaded and freezes up, the user definitely knows, because the dreaded Spinning Ball of Doom replaces the mouse cursor.  Knowing what is going on with your Mac is never a problem.

2)  Match Between the System and the Real World.  Since this is an operating system, it doesn’t exactly resemble anything in the real world physically, but elements within it do correlate to real-world equivalents.  One example is the ubiquitous office naming system – files, desktop, and so on – plus some Mac-unique concepts like Stacks (multiple-file organization on the dock) and Spaces (multiple desktop workspaces).  There is also the dock, which houses the most-used applications and resembles a boat dock.  Icons in Mac OSX also look very much like their real-life equivalents, right down to the Mac Hard Drive icon.

3)  User Control and Freedom.  The user is free to do as he pleases and has complete control over all shortcuts in Mac OSX.  He can open any number of applications and/or windows, customize keyboard shortcuts for features like Expose, and adjust just about anything in System Preferences.  If he wants to close a window but keep the application running for easy future use, he can simply close the window and Mac OSX keeps it running until he actually “quits” out of it.  Desktop and screen savers can be customized heavily as well, including not only the desktop image itself but the organization of files on it (this may seem like a no-brainer, but it is important).  Hard drive contents are easily searchable with Spotlight.  Workspaces can be changed with the Spaces feature.  And let’s not forget Time Machine, an automatic backup system built into 10.5 and later.  With this feature, not only can the user back up when he wants to, but he can “time travel” to recover a file from any point in the past – say, if he changed something, saved, and decided a week later that he wanted the earlier version back.

4)  Consistency and Standards.  Starting with Mac OSX 10.5 Leopard, all windows and Apple-built applications in the OS have the same uniform look and feel to them.  The Menu Bar at the top is always there, only changing contents to reflect each program.  Icons for all programs have the same look and feel to them, no matter by whom they were developed.  In all the Apple-supplied programs, graphics are similar, like with iTunes’ Coverflow and iPhoto’s photo-viewing options.  Everything has the same polish and clean design.

5)  Error Prevention.  This is HUGE on Mac OSX, and one of the reasons why I like it so much.  It has error prevention built into its core.  There are virtually no viruses, despite the rising number of Mac users, and nowhere near as much spyware can glom onto your system as on Windows.  De-fragmentation is basically unneeded except in the direst of cases, since the OS does it in the background for you.  And the OS is stable.  Programs may crash sometimes, and the computer may get overloaded like any computer, but the vast majority of the time Mac OSX will not crash.  There are no Blue Screens of Death.  If a program is acting unruly, the user can right-click on the program’s icon in the dock and Force Quit it, no Ctrl+Alt+Delete needed.  Simple as that.

6)  Recognition Rather than Recall. This is something Apple had in mind from the very beginning when they designed the Mac OS.  Unlike Windows, which was developed by and for engineers originally, Mac OS was developed to be easy to use for the average person.  Once one goes through the basics of how to navigate files and programs, it is stupidly easy to do it again.  And yet, for one used to Windows, Recall may be much more strained when making the switch.  The Menu Bar is at the top instead of the bottom, system options have to be accessed through the Apple icon, the close button for windows is in the upper-left instead of the upper-right, and the OS is application-based, not window-based like Windows.  This last point can be particularly disorienting for someone who is used to the fact that the program is closed when the window is closed, for it is not so on Mac OSX.  But all these problems are not really that big of a deal after a little getting used to.

7)  Flexibility and Efficiency of Use.  Here we have two conflicting points.  While the OS itself is flexible (see the customization examples above), the actual development of it is not.  Most people, myself included, do not see this as a problem since we’re not programmers, but to the development community Apple’s tightly-controlled, closed-source platform is stifling (Windows is also closed-source, as an aside).  This means no one can change it except Apple, so technically this is very inflexible.  I suppose that’s why Linux is around.  But Mac OSX is very efficient, especially 10.6 Snow Leopard, which has a very tiny footprint on the hard drive and is easy on resources.  Using the OS is also efficient, since there is no bloatware installed and there is no slow-down (with the obvious exception of running highly-intensive programs).  Searching is a breeze with Spotlight.

8)  Aesthetic and Minimalist Design.  This is Apple’s mantra.  Avoid excess.  Everything in the OS is highly polished, clean, and aesthetically pleasing to the eye.  Soft gradients are used extensively in windows and buttons, and, unless the user FUBARs the desktop or dock, everything there is also neat and tidy.  It’s so well done that the latest Windows iterations have been imitating it in terms of shiny design and smoothly-flowing animations.

9)  Help Users Recognize, Diagnose, and Recover From Errors. In the event a problem occurs, Mac OSX is very helpful in providing the user with information.  If a program crashes, a dialog box pops up and tells the user their options (send error report, re-start program, etc.).  If your Mac is frozen, you’re definitely told so by the lovely little spinning beach ball.  Also, if there are connectivity problems or something of the like, the OS will guide the user through what to do, such as if the computer can’t connect to the internet and the Network Manager automatically kicks in.

10)  Help and Documentation.  With every Mac comes both documentation and the OSX backup discs, so if something REALLY messed up happens, all is never lost.  There is also the Help option in the Menu Bar at all times, no matter what program.  And if you really can’t figure it out, Apple’s website is chock full of help documents and forums for solving the problem, not to mention Apple Stores and their Geniuses.  OSX is well-supported.

So, after evaluating Mac OSX with the Ten Heuristics, I’d have to say it scored pretty highly.  Aesthetically pleasing, easy and efficient to use, powerful and stable, and well-supported – once one gets used to Mac OSX, it is hard to think any other system is better.  It’s not perfect, but I’d say the reason most people don’t like it is simply unfamiliarity, or a general dislike for Apple that clouds their judgment.  Can’t really fault the OS for that, though.

[Image: Mac OSX Snow Leopard screenshot]