Monday, December 19, 2005

Game 5.0

A little theory that I'm working on:

Game 1.0 is the abstract videogame, with minimal or no aesthetic form (or 'fiction', as Jesper Juul defines it) beyond simple representation, where the objective of the game is contest-based and the game is, essentially and overtly, like real-world games and sports. Examples: Tetris, Chess, Virtua Fighter 2, FIFA, Counter Strike, Battlefield 2. The largest minority of video games, past and present, are probably game 1.0.

Game 2.0 is where the fiction acquires a purpose. In either exploratory or led forms, the universe that the player plays within acquires more than just functional representation, and the player may well start to make play choices that stem from emotive rather than analytical concerns. Players come to love and hate characters within the game itself (if there are any), and the game does not necessarily have an overall goal (although it usually does). Examples: Ico, Elite, Grand Theft Auto 3, Zelda: Ocarina of Time, Halo (single player), Resident Evil and Starcraft (single player). Game 2.0 games also make up a large minority of the video game canon, and between 1.0 and 2.0 we have probably over 90% of games today. Some games have aspects of both 1.0 and 2.0, like Halo's single-player and deathmatch modes.

Game 3.0 sees the player affecting the fiction in a self-motivating manner, as it involves taking on the mindset of the creative and the carer. While the challenge element may still be present within game 3.0, the concept of winning or completion is much reduced or non-existent compared to the over-arching ethic of involvement. Character empathy may still be a strong influence if the game has strong characters. Examples: Sim City, Animal Crossing, Startopia, Nintendogs, The Sims, The Movies.

Game 4.0 moves the player into the realm of a society. While other versions permit players to play with or against each other (teams in Counterstrike, co-op mode in Halo, item trading in Animal Crossing), game 4.0 expands the fiction to include many players, often very many, and so the fiction takes on a life of its own largely outside the creator's purview. This can also happen with game 3.0 (people coming up with novel uses of The Sims to create houses of horror spring to mind), but the difference is that game 4.0 is completely beyond the control of any one person. While creators or maintainers of the game may add to or modify its underlying structure, the resulting landscape rarely comes about as predicted. Examples: Planetarion, Ultima Online, World of Warcraft, EverQuest and Second Life.


So why the numbers?

Well, it's as good a grouping as any, and it corresponds very, very vaguely to the order in which they emerged as forms. (I know that some people might start dragging out a million historical examples, such as trying to pin down when the first MUSH came about, but again, I mean it in a loose sense.)

There's also a sense of progression behind the order, in that the key trait it highlights is the ever-growing development of fiction in the video game, and the stalling of the abstract and mechanical concerns. It is not meant to imply a sense of new-killing-old, however, as that would be a preposterous claim given a cursory examination of the shelves in any games store. Game 1.0 and game 2.0 rule the roost there.

No, the key point here is talking about how the fictional element of a video game has gone from an abstract representation of something to shoot at to a multi-layered player-created world, and how this fundamentally changes the relationship of player, game, goal, gameplay and so on. It highlights how the terminology and ideas that underpin one version do not necessarily hold for another.

Game 1.0 is wholly dependent on gameplay, for example, because the whole structure is an abstract simulation designed to encourage players to compete and to win (or survive as long as possible). Game 1.0 advocates therefore champion gameplay and gameplay innovation over everything else. Control, response, reaction and rules are the things that really matter in game 1.0.

Game 2.0 also relies on gameplay, but the sense of gameplay is different. In game 2.0, gameplay comes to mean the broader idea of progression, of specific threaded challenges that can be set up one after another and which might even involve rule changes of the sort that goes against the concept of game 1.0. Discovery, opening up the fiction of the game to see what's there, is what keeps game 2.0 interesting. As a result, game 2.0 can survive and prosper on entirely non-innovative gameplay as long as the fiction is interesting.

Game 3.0 relies more on interaction than gameplay in the 1.0 or 2.0 sense, in that a breadth of options and creative tools constitute the game, and while the fiction of the game is mutable and reactive, it is not so in the sense of stated kill-or-be-killed goals. The play in game 3.0 consists partially of unlocking the fiction, but more of learning how to use it. Game 3.0's play is centered on creativity and maintenance.

Game 4.0's play is almost entirely reliant on the players and the society that they create. While the rules and structures of game 4.0 might help to induce certain styles of play (like levelling up), these often give way to purely social interaction and creative group behaviours. The furthest along are efforts like Second Life, where all pretence of the need for such rules - except for an incentivising economy - is abandoned. In Second Life, players are encouraged to just be. Evaluating game 4.0 on the basis of game 1.0's sense of gameplay is therefore completely meaningless.

There is also a huge difference in players between each of the four versions, which is something that often goes unrecognised. The games industry and hobby are notoriously loose with their language, with terms that mutate depending on the speaker and the listener.

The ur-phrase of the industry, 'gamer', is about as misleading and ill-understood as it gets (and the subject of endless raging miscommunications masquerading as debate on the internet). What one MMOG player means when he talks about what gamers like, for instance, is worlds apart from what an arcade freak who loves his Street Fighter 2 means. They may both play games, but what they even mean by the word 'game' is enormously different. As such, I think these four types of game describe not only four different concepts of play but also four types of gamer.

Type I gamers are probably best called 'competitors'. They are only interested in the competition, in the abstract scores and the achievements associated with them. To the competitor, whether the space invaders are ships or apples is unimportant, and whether the two teams in Counterstrike are terrorists or dancing Scotsmen isn't really relevant. And whether the opposing sides in Chess are silent or scream when taken just doesn't register as more than a momentary giggle.

Type II gamers are better called 'adventurers'. Their general motivation is exploration, discovery, and getting to the end of the game if there is one. This may or may not involve a story or some other narrative thread, but the key is that these gamers are engaged with the fiction of the game as much as the abstracts. Adventurers tend to dislike games that break the 'spell' by reminding them that they're not in a fiction, but are rather just playing with a set of virtual objects.

Type III gamers are better called 'growers'. They're playing their games because they want to play with them rather than against them, to make and do and look after the game like an organic pet or toy. They may want to defeat the challenges in the game, not to win, but so that their creation can be better.

Type IV gamers are better called 'actors'. They're playing to be a part of the game, which can mean active roleplay or (more often) living as an extension of themselves via an avatar in another place. They form relationships and bonds, engage in teamwork, sometimes fight, sometimes build, and essentially just become a part of the fiction itself.

None is the true 'gamer' and none of them takes precedence over the others, though they do fight each other and call each other names out there in webland. They're all gamers, but they're as completely different as hip hop and metal fans, and each attracts its own cultural tropes, its own gender balance, its own type of media coverage and so on.

The types are also not wholly exclusive. I think I'm an adventurer, for instance, but I am sometimes partial to a bit of FPS deathmatching and the odd racing game, and I used to very much enjoy a multiplayer game of Medieval Total War.
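Summarised as data (purely as an illustrative aside - the field names and structure below are my own choices; only the content comes from the theory above), the whole thing fits in a small lookup table:

```python
# A minimal sketch of the game 1.0-4.0 taxonomy as a lookup table.
# Content follows the post above; the structure itself is illustrative.

from dataclasses import dataclass
from typing import List

@dataclass
class GameVersion:
    version: str
    play_focus: str    # what the play is really about at this level
    fiction_role: str  # what the fiction does at this level
    gamer_type: str    # the player archetype drawn to it
    examples: List[str]

TAXONOMY = [
    GameVersion("1.0", "contest, control and rules", "minimal representation",
                "competitor", ["Tetris", "Chess", "Counter Strike"]),
    GameVersion("2.0", "progression and discovery", "a world to open up",
                "adventurer", ["Ico", "Zelda: Ocarina of Time", "Halo"]),
    GameVersion("3.0", "creativity and maintenance", "a mutable toy to tend",
                "grower", ["Sim City", "The Sims", "Nintendogs"]),
    GameVersion("4.0", "society and relationships", "a player-made world",
                "actor", ["Ultima Online", "World of Warcraft", "Second Life"]),
]

for g in TAXONOMY:
    print(f"Game {g.version}: {g.play_focus} -> the {g.gamer_type}")
```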

Now who wants to have a go at the fun part?
Game 5.0



PS: This is probably my last post before Christmas, as I'm flying back to Dublin for some family celebrations and old-friends shenanigans. So a Happy Christmas to all!


Sunday, December 18, 2005

John Spencer RIP

At least once every other day, my girlfriend and I sit down to watch a batch of The West Wing on DVD and experience genius. The first three and a half years especially are pure gold, and even though the series declined somewhat in the post-Aaron Sorkin/Thomas Schlamme/Rob Lowe era into a more event-driven politics show, it has remained a firm favourite. In particular, I have always had a strong affection and affinity for Leo, the chief of staff, a father figure played by the amazing John Spencer. So I'm greatly saddened to hear that Spencer died recently.

But there's a little more to it than that for me on a personal level. I've always identified with this actor because he reminds me of my father. There are some striking parallels. He was 58; my dad is 63. He was a recovering alcoholic, as is my father. They're both actors, and they both have that post-AA way about them that can be inspiring and infuriating in equal measure. What was amazing about Spencer's portrayal of Leo was, of course, that he could bring so much of himself to the role (or so I believe), and in many ways the character seemed to be the most realistic portrait of a recovering alcoholic that I've ever seen.


The tragedy of it of course is that Spencer was only 58 when he died, and that frightens me for my own family. While change is inevitable and death comes to us all, we always want to think "not yet". We always want to stave off the inevitable present where things change irrevocably, but there is nothing that we can do to stop it. The challenge in change is not that it comes. We grow sick, our loved ones get killed in car crashes, our kids die of leukemia, we get AIDS, we lose a leg, we lose our financial worth. The challenge is how we deal with it.

In writing this post, I'm trying to write out my thoughts to understand what feels like an abstract loss and yet a personal one. John Spencer the actor, the man who lived in Los Angeles, meant very little to me. John Spencer the archetype who filled a heroic role in my daily life meant a lot. Mythology and story have always played that important role in our minds, and mine is no different. We relate to the media space as a mirror of ourselves, both in the news and in fiction, and this is why it matters to us so much. This is why there are outpourings of grief when symbols die (like Princess Diana).

So I've lost a symbol, and that makes me very sad. I've also felt the closeness of death in a strange way, as it makes me reflect on what I have within my life, how my real-life Leo, my father, is important to me, and how I don't want to lose him yet. It makes me reflect on the importance of the present (see previous post) and how this really is all that we have. In a year that has seen a lot of symbolic 'good' people die, this is a poignant and personal way for it all to end. And I do hope it is the end.

So, yes, I hope that Spencer rests in peace and found peace in his life. I hope that we can all find that sort of peace in our lives and find a means to become what we can be and what we are meant to be. I hope that we are all able to understand the challenge of change when it comes, and we can all live for today. AA teaches that living for each day is important, a faith in the wider picture is important, and the present is important.

Live in the present, my friends.
Live.


Monday, December 12, 2005

Uncreativity and Generation Zero

So I'm having an MSN chat with a friend of mine all about why there are so many licensed movies and games these days. Specifically, I'm talking about movies like the entirely blah Narnia, and he's pimping the teaser trailers for Superman and the new X-men film. Remakes seem to be everywhere: in music, in film, in re-set comics universes and games. Licenses likewise. It seems impossible to get a new idea out of the gate these days if it is not based on something that is already known.


We hit on the topic of creativity in general. Weirdly, given the amount of power that recognition value seems to hold over us, we live in a time when we are inundated with creative options. We have PDAs to write our novels, DV cameras to make our movies, software that can create virtually any music we choose, programming languages to create any game we can think of. We have the broadcast means via the internet, p2p networks and so on. We are sitting on an embarrassment of creative riches.

Our societies have similarly come to embrace openness of thought and idea in ways that were impossible for previous generations. From this modestly priced PC in my room I can access a vast library of information. I can get news from anywhere, and I can find discussion groups and forums on any subject. I can order any tool that I need, and the modern media is highly free-thinking in any one of a dozen directions. I can read anything and I can write anything, and no subject is taboo. I have gay friends, Buddhist neighbours, an empowered vegetarian girlfriend - it's all going on.

And yet we seem to be a profoundly uncreative generation. In the games industry, the hot topics of new game creation all center on product design methodologies. In film, it's focus groups and properties. Music seems built on three pillars these days: the pop cover, the dance remix and the hip-hop rip-off. It goes further than this. We seem to have lost the idea of creativity with depth, so a lot of the material that is original is corny and based not exactly on the recognisable, but close enough. Like WW2 games that seem to spend their days copping a feel of Private Ryan's nuts. Or The Incredibles.

There are a few noble examples amongst all this, even. My previously cited example of the new Galactica show is a good one. The X-men movies actually make a good fist of it too, and some of those dance remixes/resamples are cracking tunes. The Zelda games continue to inspire. The League of Extraordinary Gentlemen comic is basically a really cool mash-up.

Our society reflects a strange dichotomy of the possible and the old, though. You will see more superhero movies; you will see Terminator 5. You have seen an exploitation of the Exorcist property, and the Omen remake is around the corner. Where the film-from-a-book was once a derided practice except in Kubrick's hands (and he basically remade them from the ground up), now we review and praise these films based on how faithful they are to the source material - regardless of, and seemingly unaware of, how good or bad the resulting movie is (usually turgid).

And here's another realisation. This past-mining trend works. It works and it works really well. Between the combined sales of DVDs, theatre tickets and merchandise tat, a known property can make an absolute fortune. We are not Generation X, we are Generation Retro. We think almost subconsciously about our entertainment in terms of whether we recognise it first and foremost. We objectivise what we see, which almost ruins the chances of good ideas making a splash, and we almost 100% predictably plump for recognition over the new.

Some of this is wrapped up in sales technique, no doubt, because it's easier to make a trailer that starts with 'from the novel by' or 'from the creator of' or 'inspired by the hit show' or whatever. I don't blame the studios for buying into a trend that is palpable, because they are there to make a profit.

It seems to me that the problem lies with us ourselves. Despite our liberal outward appearance, our new society which values multiculturalism and our new technology that literally places the world and our talents at our fingertips, it seems we are in fact an incredibly conservative generation. This may come as no surprise to some, I guess, but it seems as though not only have we opted for safe over strange, we've done so to such an extent that we've forgotten what strange is.

Maybe the root of the uncreative problem is the embarrassment of options itself. Maybe it's the profound lack of spiritual connection that our generation seems to have tapped into, robbing us of any sense of courage for the future. Maybe there's some weight to the idea that art and suffering are very deeply linked, or maybe the problem is that we are so media-literate now that we only understand reference.

Ironically, if we look a bit further back into the past, we can see that this sort of thing has happened many times before. Modernism constituted a reaction against a staid late Victorian mentality. Postmodernism constituted a reaction against a conservative and violent 40s and 50s. Romanticism was a quiet revolution against the intricate establishment aesthetic of its day, re-introducing natural poetry in place of the highly knowing, referential poetry that came before it. The creative bubble of society seems to wax and wane with the struggle to get out from under the binding and increasingly brittle precepts of the old.

Generation Retro seems to me to be the tail end of the postmodern idea. What started out as a movement to break down the intricate symbolism of the past has now resulted in a generation that reveres a set of disconnected symbols instead, and has gotten to the point where self-reference is built not on witty ground, but on the ground of faithful recreation and solemnity. How ironic in and of itself that many have chosen to call this decade the 'Noughties' when it is anything but. More like the 'Zeroes'.

And yet even calling for a change of ideas and new theories is in and of itself a postmodern idea. It's objectivising the creative, no matter what level you look at it from, to say 'what we need is a new Romanticism'. In the age of everything recycled, even thinking in terms of 'what we need is a' is already buying into self-defeat and the propagation of the product design/IP idea. It's a profoundly non-new way of thinking to say that what you need is a new way of thinking. Everything becomes a new branding exercise or theoretical discussion or Wired trend. A few have tried punching through the barrier, such as the transhuman idea or the posthuman idea, or even the transmodern 'bring back spirituality' idea, but these are all still inherently postmodern notions. It's all still mash-up; it's not creative.

There's one idea in Buddhism that particularly intrigues me, which is the idea of turning the mind off. Buddhism in general fascinates me, but in particular I had always assumed that meditation was the act of quietening the mind by essentially taking the time to close your eyes and let your thoughts sort themselves out. Not so. Buddhism seems intent on dissolving the conscious mind entirely, or rather, allowing a kind of true consciousness to emerge rather than a mind-dependent awareness. Buddhism seems to advocate a stance that most of us are actually unaware and unconscious all the time, that our minds and our egos are so busy reasoning and rationalising everything that we have no true awareness.

I think that there's something in this idea as regards Generation Retro, because the second thing I've picked up from it is that the main reason the mind stays so engaged is that we fear change. We like control: we like our computers that we never use, our PDAs that sit idle for months on end, our PSPs that become toys for about three months and our DV-cams that we use all of twice. We like the sense of power over our own destiny that all this capability brings, but what we don't like is change. The reason that retro-themed marketing works so well on us is that we want it to remind us of safety. It's a scary world out there, after all.

Mostly, what I think I'm talking about here is connecting with faith and instincts. Not so much faith in God, just faith in faith. And instincts as in learning to understand what we feel rather than what we say to ourselves that we feel. There is an expression which says "Those who know don't speak, and those who speak don't know", which is humorously relevant to me while typing all this, perhaps telling me how little I know (after all, I am quite the talker).

Does it even matter that we are an uncreative generation anyway? The world is wracked by AIDS, impending wars, oil shortages and hurricanes. Does it matter so much that Joe on the street likes to go watch Narnia and maybe think back to nicer times?

Yes, I think that it does. Quite aside from the psychological unhealthiness of living in the past, and its flip side, the technological promise of living in the future, we seem desperately unable to connect with reasons to live in the present. With all manner of Office-style grind on display and an increasingly vacuous middle ground in most areas of society, I think creativity matters very much. I think self-acknowledgement matters a whole hell of a lot too, and we're losing track of both. Depression is up, stress is up, and terror from planes crashing into office blocks leaves us unable to raise a passionate and clear argument as to why torture is a bad idea. We're all off in a hundred other places other than this one.

Generation Retro is all about being elsewhere.
Generation Zero could be about being right here.

So what is it I'm saying? That we need a new way to think?

No, I'm saying that thinking is the problem. Reasoning and rationality are themselves the problem. Articulating meaning and quantifying creativity are the problem. Methodical attitudes are the problem. Self-prescribed rule structures for how things are made are the problem. Reference itself is the problem. Reaching to define our new era with the tools of the old is the problem. Casting creative efforts in an IP/genre/X-meets-Y framework is the problem.

We need to stop being overly rational and media-aware and start becoming conscious and actually-aware. Unlike in the modernist era or the post-modernist era, the pressing need of today is not to frame our era nor understand the frame. The pressing need is to stop looking at the frame and start looking at the picture. Stop recasting ourselves in the cloaks and symbols of bygone decades and start realising that we don't live in the 20th century any more. We should be conscious of the present, centered in the present and engaging with the present and leave the past and future be for a while.


Wednesday, December 07, 2005

The Three Rs

A revolution is a moment in history when the existing political order gets up-ended entirely and new thinking and a new sense of values, power and social order emerge; it is often preceded by, or generally associated with, violence. For example, the French Revolution.

A reformation is in many ways the opposite of that idea: an attempt on the part of one or several segments of a society to reassert an original set of founding ideas or aesthetics and essentially re-establish what was (or what is believed to have been). Again, it is often violent in nature, although a restoration is a gentler form of reformation. For example, the Protestant Reformation of 16th century Europe.

A renaissance is somewhere between these two extremes, and usually non-violent. A renaissance is essentially a period of rediscovery of older ideas, but also of putting those older ideas to new uses. Renaissances don't, on the whole, involve the large-scale return to the old ways that reformations do, nor up-ending everything like revolutions, but rather pick and choose the best of the old and make something great and new as a result. Such as, of course, the Renaissance.

So which way is the video game headed?

Well, on the one hand we have the Revolution on the way (although whether it actually is a revolution or not is an open question). On the second hand, we have quite a few seriously 'old skool' indie developers trying to get us all back to the way things were, which might be classed as reformation. Lastly, we see the re-issuing of some old classics in new forms (Prince of Persia, Resident Evil 4), which might, at a push, be called renaissance games.

Is the video game heading any of these ways?


Friday, December 02, 2005

Game City

I've been talking to a friend of mine about his job, and having been in it only a short while, he thinks that he's not going to stick it out because of the commute. For reasons that are non-negotiable, he's not able to move close to the job, but he is able to get there by car or train. This being Britain, though, that basically means a four-hour round trip. I sympathised, and I asked him whether they would allow him to work remotely for some or all of the week. He doubts it.

In Britain especially, game developers and publishers have a singularly annoying tendency to locate themselves off the beaten track, preferring to set themselves up in small towns and on the edges of cities right across the whole country. There is usually a combination of reasons for this. The first is the cost of renting, but the second is usually something like the company founders' home town (especially in smaller companies). The third is the confusion that many developers and publishers still seem to experience over identity. They want to be like entertainment companies in the marketplace, but still instinctively use much of the methodology of software engineering firms without realising the enormous differences. The games industry is an entertainment business, not a technology business. It's amazing how many developers still don't accept that.

This is the sort of behaviour that is fine for young startups, but there's a real problem that the industry has to face up to. Development staff tend to move around between companies because of costs, but they also get older and form relationships and families, and it's unrealistic for companies to expect them to move house every year, from Aberdeen to Cardiff, Cardiff to Brighton and Brighton to Birmingham, any more.

At the same time, it's unreasonable for prospective employees to expect long-term (or even permanent) employment contracts with games companies any more, because of the costs issue. This is what leads to the strange scenario where games companies are on the hunt for staff who'll move across the country for six months. Fine enough if you're a youngster, but once you hit 30 it becomes about as attractive as scabies. It's one of the chief reasons why something like 90% of games industry workers leave the industry within six or seven years, never to return.

It's also hugely inefficient for the companies to work this way. Hiring is an expensive and time-consuming process, and has become even more so with the proliferation of employment agencies charging their 20% cut to get people in the door. Maintaining large office spaces is also not for the financially faint-hearted, and the impact on schedules from sudden departures, staff who don't work out, and sundry other causes is harsh. Finding quality staff tends to become very expensive, as a programmer with twenty years' experience will charge the Earth for his services, and you'll have to pay it if you want him to move to your office in Slough or Leamington Spa or Croydon. Add in travel costs to visit publishers and the inability to find very short-term workers (like quality sound engineers), and the companies have painted themselves into a corner where everything costs a fortune without any flexibility. Development companies (and publishers lately) can moan about costs and industry conditions as much as they like, but it's their own strategic decisions about location that breed these problems.

In the US, Hollywood is known as the place to go if you want to work in the movies. New York is the center for the financial industry, the news industry and business in general. San Francisco is where you go if you want to work in technology development. In Britain, London is where television is at. London is also where magazine publishing largely operates out of. In fact, in many industries, especially entertainment industries, a common location is vital to the relative success of everyone, and with good reason.

A common location reduces the price of doing business by encouraging flexibility.

It makes hiring staff much easier and cheaper because they demand less and are more willing to work under shorter-term contracts. This allows both developers and publishers to maintain smaller facilities and thus be more flexible if projects collapse. A common location allows a better freelance culture to emerge, which encourages workers to stay in the industry for longer. If I, as a worker, have three four-month contracts during the year, then I am much more likely to be excited by the prospect if each one is commutable from my house. If, on the other hand, the first one wants me to move to Swindon, the second to Scunthorpe and the third to Dundee, I'm just not going to seriously entertain the idea.

Common locations increase contactability between publishers and developers, allowing for a much more co-ordinated industry. They allow real business networks to develop and a community of companies to establish themselves. They encourage better working practices, as the reduced timeframes encourage more focused work and therefore productivity. They make the prospect of using remote workers far more palatable and therefore reduce the inherent risk of moves like outsourcing. Mostly, they encourage both competition and collaboration on a realistic level. Development companies can collaborate on projects, for example, if they're only up the road from each other, while a common location also facilitates the creation of service businesses like art and animation houses, design consultancies and engine specialists. A common location would encourage PR agencies to seriously court the games industry as a set of viable clients in ways that e-mail communications never do, and also encourage a more vibrant and in-touch industry media to develop.

The best part is that all over Britain, urban regeneration projects sponsored by local councils are practically crying out for industries to set up in their area, providing all manner of discounts and tax-rebates to businesses that do choose to relocate.

The big question is where Game City should exist. Realistically, it should be a major city with an international airport. It should be a city with representation in the other media industries on which the games industry relies for much of its licensing etc. It should have good transport connections so that staff can get to and from work with a minimum of fuss. This realistically means that Game City should probably be London, possibly Edinburgh, or maybe Manchester at the outside. Or, if anyone feels like a trip abroad, Dublin.

London is not as scary as it sounds. While the rent in London is supposedly high, south London is less expensive and is replete with councils looking to get business to move in. Southwark, for example, is only just across the river Thames and yet the rent there is really very cheap by comparison to the rest of the city. Southwark is also 15 minutes by Tube from Soho, Shoreditch and the media capital of the UK. Or there's Lambeth, Croydon, or any one of a number of areas in East and North London.

Regardless of which city and which council, the question is whether the British industry has the sense to club together for change. There is an enormous lack of trust in the British games industry, and a deep level of organisational inertia pervades many companies that have gone too far down the route of establishing multiple remote studios (a costly and questionable move). There has also been an enormous number of closures and company collapses in recent years in the UK, and would-be industry-wide associations like Tiga are hugely worried about the long-term prospects for UK development, yet heavily mistrusted by the companies themselves.

The key word here is 'entrenchment'. The established UK games industry is too entrenched, stuck in its old ways of thinking and essentially being torn apart by its own intransigence. This applies to managers and employees, publishers and developers alike. Perhaps they're just too cynical to do it differently. Nonetheless, I think Game City will eventually happen. Old companies will die out, new companies with more nimble philosophies will rise in their place, and they won't be so tied to the old garage-days mentality. There's a younger generation of creative staff who never worked in the bedroom-coder universe, and they have fewer qualms about trying something different.

This is why I think that five years from now the UK games industry will rise again in a new form, a better form, representing a shift in the generational mindset. As the winds of the current and next generation continue to blow their harsh message, these proto-company owners are experiencing first-hand what it is to work under the current regime, and they know that things could work much better if they were only given a chance. That chance will come soon.


Saturday, November 19, 2005

The Final Generation

Here we go again. Hardware launches ra ra ra. Hardware-centric branding pushing a false generation war ra ra. Hardware promises co-opting the language of game developers to sell a nonsense vision (online, super-realism, emotion etc) ra ra.

Here's what the next generations are offering for real: more polys, more pixels, online play and a controller that lets you, you know, move stuff about on screen. Whoop-de-fucking-doo.

I am so... tired of this nonsense. I am so bored of watching the same little bit of history repeating, with the same cod arguments, the same lies, the same messages and the same complete lack of anything really INTERESTING happening. I'm tired of watching developers prostrate themselves before the temple of marketing and watching formerly perfectly good creative people turn into hollow replicants of their former selves. I'm tired of watching journalists become ever more complicit in the squeaky wheel and grease show, and I'm tired of watching an industry regularly deceive itself, blame external forces and otherwise try to pretend that nothing has ever changed.

And I'm jealous.
I'm very jealous of watching other media move on. I'm watching the new Battlestar Galactica today for the first time (always last to the party, I know) and I'm loving it. Normally I regard remakes with suspicion (this is something that the games industry teaches you after a while), but in Galactica I'm watching a properly conceived, bold attempt to really work with material to make something new. I saw Serenity the other week and had similar feelings. I watch US drama shows all the time (Lost and Rome being current favourites) and again, I am amazed at the way that this medium has moved on. And it makes me very jealous. And wanting to make my own shows - but that's another story.

It's a real Gordian Knot that the whole field of videogaming finds itself in because there are several competing forces at work here, none of which is actually healthy for the games. These forces are what drive the generation cycles. PS4 is already on someone's drawing board, as is XBOX5, DS3 and whatever else. Yet any fool can tell you that the current direction of the industry is ultimately going to lead to the death of everyone bar the hardware makers themselves.

On the one hand, the business has come to depend on hardware sales and second-hand sales to offset the public's increasing weariness. There are the manufacturers, whose dominance struggle is beginning to look desperate on all sides, and who are now launching new hardware seemingly every other month. There are the publishers, who are increasingly looking for ways to look sweet enough to be bought, because they know full well that the costs problem which killed developers by the thousands over the last three years has come knocking at their door. There are the small developers, who've gotten bought themselves, or sidestepped the main industry to go off into mobile land, casual land or budget land - and are finding the same problems there. There are the indie developers, whose belief is driven by a need to return games to their past, and who focus on making 'true' games. And at the center of all this is the sense that maybe the problem is simply that there isn't enough money to go around.

If history has taught us anything, it's that this sort of pack cannibalism cannot exist as a permanent mentality. With virtually everyone in the different corners of the industry now entrenched in their positions and playing an extended game of Russian Roulette, the industry won't survive in its current shape. As with any form of media entertainment, from wrestling and porn to modern art and cinema, these things have a tendency to balloon, burst and then reinvent themselves in a new form.

Well, we need our reinvention, but it isn't going to happen until we balloon and bust first. This new generation, as with other generations, is just another turn of the screw. It's not something that *can* be solved by untangling the strings and everyone being reasonable (as is often suggested in forum discussions on the fate of the industry).

We see the same threads and blog posts again and again talking about the nature of the industry, and if only the industry could be made to see sense, and if only it could be made to do things in a reasonable manner, and if only and if only. It can't.

The parties are too entrenched. Like the First World War, the conflict now seems insane, and yet the fact that the enemies of every faction are all staring each other down compels them all to fight to a bitter end that may or may not ever come.

Alexander the Great didn't untangle the Gordian Knot. He just cut it in half with his sword. The symbolism is obvious. Sometimes messes get so convoluted and mixed up that the only solution is a clean break. There are times when the only sane course of action is an unreasonable one.

So we head into generation six or seven (I forget which it's supposed to be) full of fear. Fear for our hobby, fear for our direction, fear for our jobs and livelihoods. The trench guns are firing, the mad charges have begun. The clarions are ringing around the ramparts, and there we are. You, me, a bunch of other guys, dressed in regulation hoodies and sneakers, with shaved heads, mortgage payments or rent, our thirtieth birthdays whizzing by with maybe a kid or two. We all know that it's madness, but we're going over the top anyway.

See you on the other side.


Monday, November 14, 2005

Label names

I've gotten a lot of interesting feedback on the trade label idea, both in comments and forum posts and through private email. It seems about 50/50 in replies. Some people really like the idea; some people think it's elitism writ large. A lot of people are worried by the question of who exactly would do the voting, and more than one poster has questioned the validity of the idea altogether.

Which is all fair enough.

The simple goals behind the suggested project are to draw the eyes of consumers to games, films, books and whatever that they might find imaginatively interesting. There really are no snootier ambitions to it than that.

So I've been thinking about what the name of the label could be. I wanted something light-hearted enough that it didn't sound pretentious, something straightforward but slightly witty, and ultimately as easy to understand as "Fair Trade".

I like the name "Not Dumb".
What do you think?


Sunday, November 06, 2005

A Trade Label

I've had an idea in the back of my mind for a little while which I think I'll share.

One of the things that annoys me about our culture (and especially the media within it) is the increasing dumbing-down/juvenilisation of the whole thing. You have increasingly intelligence-free music, games and so on taking up more and more of the shelf space, and the consumerism inherent in that sort of culture essentially does what it does in any other market: it makes it harder to find the more interesting, niche stuff.

I for one believe that there is such a thing as Quality in culture, an indescribable sense of creative thought manifest, from Ico to West Wing to whatever, but I think it's getting harder to figure out where the quality is at.

So I thought: in the food industry you see specific labels popping up, like 'Organic' and 'Fair Trade', which are intended to draw the concerned consumer's eye and point them toward quality. In the first instance, quality food; in the second, the knowledge that the food was not procured by bleeding some farmers dry. Both are successful minority initiatives. They haven't dominated the landscape, but they help.

So I'm thinking, why not start a trade label like Fair Trade which tells media-buying consumers "This media actually has some creative brains behind it and will speak to you like an adult". Not a judgement on the content from a political or whatever standpoint, but simply a message label that says "We think this is an honest attempt at art and/or entertainment".

Ordinarily that would be the province of reviews, wouldn't it?
Well, in recent years it seems to me that the whole structure of reviews and review journalism has essentially become untrustworthy. There are too many rent-a-reviews floating around now to give any clear indication of anything, and while sites like Metacritic provide a summary, they're still providing a summary of a skewed data set.

So what I'm suggesting is anonymous groups of maybe 30 people over the age of 25 who evaluate pieces of media and vote on whether to approve them, according to a set of criteria (there may be a group per subject, or several if it gets popular, though not split by genre). At first it's unlikely that such an effort would be taken very seriously, of course, and its recommendations would likely as not just sit on a website for people to browse.

But over time the idea would be to allow the label of this group of people to be used by manufacturers under a free copyright license on approved products only, be they books, films or whatever.

Interested?


Tuesday, October 11, 2005

Comment spam

I'm going to have to turn off comments for the moment to stop the spam.
If you have any comments on any articles, feel free to email me and I'll compile a proper post of feedback etc.


Tuesday, October 04, 2005

Cost/Benefit Analyses

Dean Takahashi recently gave a speech questioning why it's so hard to fund videogames (Why is it so *****g hard to fund videogames) and brought up some interesting points about why the videogame sector, whose software market is worth 18 billion dollars a year worldwide, should be funded in a manner proportional to the film industry (which Dean pegs at 60 billion dollars). He especially wonders why the mobile games industry seems to gain so much attention, despite the fact that its market is currently a good deal smaller.

Firstly, I think that Dean's got his figures for the game/film comparison wrong. It is very difficult to gauge accurately, of course, but I've read that over 3000 films were released worldwide in 2003, and the size of that market is figured to be somewhere on the order of 180 billion dollars. Dean's figure of 60 billion is, I think, the size of the Hollywood chunk of the market. So what he's doing is comparing the worldwide games industry to the American film industry, and this creates a false impression, much like comparing the games industry's total hardware and software revenue to box office receipts alone.

It seems that the worldwide film industry dwarfs the worldwide game software industry by a factor of ten. So that might be one reason why films get funded and games do not. On three thousand titles, the film industry makes ten times the income that maybe half as many console and PC titles do.

On the other end of the scale, the mobile industry may be much smaller, but the games themselves are very cheap to produce. Mobile games are a return to retro in many ways, in quite a few cases literally through ports of old games. This allows for wide portfolios of cheap product which the consumer can then buy for what seems to them to be a small fee, yet which in reality translates to an awful lot of business for the mobile provider. It is therefore potentially a hugely profitable sector of the market, and that's why investors love it.

The key phrase for understanding why mainstream console and PC games have it so hard is 'cost/benefit analysis'. It's the central phrase under which all business operates in the world today. What does the product cost? What is the potential benefit? Does one justify the other? Simple.

The cost/benefit analysis for films is a good one. Contrary to popular opinion, most films do make their money back in the long run. If you only read the popular press, it would be easy to get the impression that a film which tanks at the box office is effectively a dead duck, and that the film industry operates on the basis of trying to score one big hit in order to pay for nine loss-makers. This used to be the way that the film industry worked back in the seventies and early eighties, but the arrival of home movie formats (like DVD and pay-per-view) changed all that.

Every film that hits the cinema, no matter how bad, will then make its way onto DVD and pay-per-view. Once films have been produced, they become an eternally recyclable property, and that long-view approach is the reason why films can have budgets of 200 million dollars and be considered unexceptional. A lot of films from the fifties, sixties and seventies are still making money for their owners today.

The videogame industry's problem is that its cost/benefit analysis does not work for any sort of outside investment model. Videogames are not eternally recyclable properties, because the hardware formats keep changing. Videogames need to be updated, ported or emulated if they are to be kept current, and this costs too much for too little return for most publishers. The market for videogames is small, yet the cost of making games has been pushed very high compared to the size of that market. This creates a cut-throat and risky market where games which get released and sell 500,000 units are in fact considered failures. This is why licenses and other properties have become so important.

The film industry can turn out 3000 titles a year on budgets that are unlikely to average more than 10 million dollars (think of all the Bollywood, B-movie and Asian cinema and so on) and rake in 180 billion off the back of that.

The games industry, whose market is only ten percent of that size, should be looking to work with proportionally equivalent budgets. The average cost of a game in the PS2/Xbox/Cube era should in fact be no more than 1 million dollars. Mobile games are cheap, budget games are cheap, casual, indie and web games are cheap, and all those sectors show vitality. Console and PC games, however, are sickly, strained and increasingly pressurised because they simply cost too much to make. The cost/benefit analysis in those sectors is very bad.

If we look back to the PS1 era, it was a happier time. Game development was still a good business to be in, with good prospects. It resulted in a lot of games, both good and bad, and a liberal investment culture where publishers were far more easily disposed to funding odd ideas and seeing what stuck. The cost/benefit analysis was good because the costs were cheap.

Today, though, the cost/benefit analysis works against us, not for us. Costs have risen 300% while the market has grown by maybe 50%. Whereas companies could once afford to take the hit on a couple of loss-making games just to see which ones became hits, now they are terrified, because every release that doesn't score big is likely to bust them. Any investor worth his salt knows a bad deal when he sees one, and the mainstream games industry of today is a bad deal.
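To make the arithmetic concrete, here's a quick back-of-envelope sketch in Python. Every figure in it is one of the rough estimates from this post (the 3000 film titles, the 180 billion dollar film market, the 18 billion dollar game software market, the 300% cost rise); the 1500-title count for games is just my reading of 'maybe half the titles' above, and none of it is verified industry data.

```python
# Back-of-envelope cost/benefit arithmetic using this post's rough figures.
# None of these numbers are verified industry data.

FILM_MARKET = 180e9   # worldwide film revenue per year (post's estimate)
FILM_TITLES = 3000    # films released per year (post's estimate)
FILM_BUDGET = 10e6    # assumed average film budget (post's estimate)

GAME_MARKET = 18e9    # worldwide game software revenue (post's figure)
GAME_TITLES = 1500    # "maybe half the titles" across console/PC formats

# Average revenue available per title in each industry.
film_rev_per_title = FILM_MARKET / FILM_TITLES   # ~$60m per film
game_rev_per_title = GAME_MARKET / GAME_TITLES   # ~$12m per game

# If game budgets scaled to market size the way film budgets do,
# a game should cost about a tenth of a film: roughly $1m.
proportional_budget = FILM_BUDGET * (GAME_MARKET / FILM_MARKET)

print(f"Film revenue per title:   ${film_rev_per_title / 1e6:.0f}m")
print(f"Game revenue per title:   ${game_rev_per_title / 1e6:.0f}m")
print(f"Proportional game budget: ${proportional_budget / 1e6:.1f}m")

# The divergence since the PS1 era: costs up ~300% (x4), market up ~50% (x1.5),
# so the cost-to-market ratio is roughly 2.7x worse than it was.
print(f"Cost-to-market ratio vs PS1 era: {4.0 / 1.5:.1f}x worse")
```

Run it and the proportional budget comes out at the same 1 million dollars argued for above.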

So what's to be done?

Two things:

1. Costs must come down
2. Games must become eternally recyclable

The cost and benefit must improve.
The big question is HOW?


Friday, September 16, 2005

Nintendo: Breadth vs Depth

I assume that I don't need to tell you the news. Yeah, the controller.

The very presentable Satoru Iwata stood up in front of the world (well, the gaming press) and unveiled a controller that aims to be as intuitive as possible: a device that combines the basic familiarity of a remote control with a 3D sensor arrangement that facilitates physical gaming. This is an idea that has been gaining currency in the industry for a few years (see the EyeToy, for example).

He then made a very strong argument for the need for games companies to stay innovative above all else. Without constant innovation, Iwata claims, the games industry runs into the brick wall of boredom. If we don't stay constantly innovative, then the gamers will walk away.

Hmmm.

The reaction of the games press, blogs and forums has been (from what I've seen) overwhelmingly positive. There are a few flies in the ointment, mostly people wondering whether a controller like this is doomed by the arm fatigue it might cause, or deadpan decriers calling it essentially a Powerglove. A few have pointed out that the big mountain to climb is that Nintendo have to convince developers to develop for it instead of, say, the Xbox 360, because it'll be so different that simple porting won't do it.

But on the whole, as positive as a plus sign. Everybody wants one. The Guardian gamesblog commented "Yowza - breaking free from the PlayStation benchmark or what?!" Meanwhile Kieron Gillen exclaimed "Whether they get it or not immediately divides the entire gaming universe into cowardly, tedious luddites who are perfectly happy to sit in their squat-like holes forever and Good People. If you don’t like the Revolution controller, you are fundamentally part of the problem and killing the fucking art form."

Hmmm.

Physically, I think that the object does look like a brilliant controller. I can really see how the motion that it works with will require a whole new way of thinking in game development, a classic example being the suggested beat-'em-up wherein the player swings a sword. This is the sort of controller that allows for kinetic gaming within a 3D environment. So yes, it is going to be a challenge. And the fact that it will be included in every box ensures that new development will happen. It's not like a light gun or a dancemat in that respect. So a change is coming.

Hmmm.

My question is this: When does the games industry get beyond the need for novelty and grow up?

Grow up?
Yes, grow up.

Here's the thing: Nintendo are essentially painting the future of gaming as one of simulation and re-creation of activities etc. It's the natural progression from the EyeToy, from the enthusiasm for physics-based games, and so on. Increased interactivity and intuitive interactivity are very much the rage in games.

But I think that while there is positive here, there is also negative. This quest for the ultimate interactivity is leading development down a certain path, which is to attach its future to the idea of the game that can do anything. In actual fact, every manufacturer is pushing this technology-based vision for gaming at the moment. The more fully interactive a game can get, the better.

I disagree. I think that what the interactionist camp are doing is sacrificing progression for interaction, and what they will end up doing is sacrificing the deep single-player experience for a broader list of options. In other words, breadth rather than depth. Our gameplay may only last for two hours, but look at all the different stuff you can do.

Gillen makes the claim that people who think that the controller is a negative are killing the art. Really? I don't think so. What IS the art, really? Is the art to come up with ever-more-convincing and fully featured ways of playing the same games that we've been playing for twenty years, or is it taking the limited canvas that we have (and it will always be limited in some form or other) and actually making something with meaningful depth? I think the latter.

The thing about novelty is, it gets boring. If you're selling yourself on breadth all the time, there comes a natural terminus where you run out of options. If gamers are only used to novelty, then THAT is when they will get bored. Battlefield 2, case in point. What on Earth is the next step forward from Battlefield 2? Even more vehicles, even more guns, even more even more even more?

Depth is what is ultimately interesting in games, not breadth. Mission design, progression, how the concept develops over the course of playing it, that is what's interesting in games. Breadth of novelty is not interesting in a long-lasting way. Eyetoy: Play is only short-term interesting unless you're one of the few who likes to master everything. Most don't, and that is where I fear the path of novelty is taking us. To jaded players who don't really find much long-term play in games, and come to assume that all games are thin.

Of course, I may be utterly wrong. It's so early in the development of this and the other consoles that it is hard to make predictions that do not sound either gushing or dire.

Well done Nintendo, you've done it again.
But what exactly is it that you've done?

Hmmm....


Friday, September 09, 2005

Gaming for Columbine

Question: How long before someone in a studio (either publisher or developer) flips out and shoots up their office?


Wednesday, August 31, 2005

Actors in Games

A few years ago, as long as ten maybe, when console and PC games entered into their CD-ROM era, videogames had actors in them. They combined film footage with gameplay, to mixed effect. You had some passably good ones, like the Wing Commander 3 and 4 stuff – which though not high drama was at least watchable – and you had some pretty awful stuff, like the interactive movie fad. And then games stopped using actors (for the most part) to tell their stories.

Obviously in some cases there was no need, like Grim Fandango, because the game was largely in the 'animated' style anyway. In many cases, games moved into the territory of trying to recreate humans for their stories using animation instead. As games increased in their technological oomph, they could get more and more realistic characters on screen. Voice acting remained, but increasingly the action consisted of these CGI people talking and shooting and whatever.

Today, actors have all but disappeared from the visual side of games, although many are doing quite well out of the occasional voice acting work that they garner. Games like the Metal Gear series, the Grand Theft Auto series, Max Payne, Resident Evil, Halo and many others now use exclusively animated actors to tell their stories, costing millions of dollars in animation and time.

The problem is that they don't really work.

Emotional Connections

Game directors want more emotional connection in their games. Emotional connection, it is thought, is the key to games becoming successful in the wider media. If we can enjoy games on a more-than-just-fun level, then they and we are the better for it. This I broadly agree with (although not with the idea that the emotional connection is therefore a heroic one).

Cinematics and the idea of emotional connection are often closely tied. In the last few years, some of the most powerful games have featured a variety of cinematic moments. Ico and Rez, for example, use sparing but well-judged scenes to maximise impact. Grim Fandango is far more famous for its characters and story than its puzzles.

What these games, and many others like them, have in common is that the characters in them are not realistic humans. Grim Fandango's characters are all skull-dolls and big demons. Ico and Yorda are very ethereal versions of a boy and girl. Go further afield from videogames to CG animation in films and you see a similar trend. Sully, Gollum, Woody and Buzz and, yes, even Jar Jar Binks are all successful 3D animated characters who are not realistic humans.

Compare the emotional connection of those creations with any realistic human. There is a noticeable difference. Re-created humans look weird. Their lips don't quite fit, their eyes aren't quite right, their hair doesn't feel real, their body weight seems floaty. They get more and more detailed, yet still all we can seem to notice is that they're not quite correct. It bothers us.

Even today, we have the upcoming release of the Godfather game, with that picture of the CGI Marlon Brando. He doesn't look quite right, but why exactly? We watch the extraordinary lip-syncing and such in Half Life 2, and yet it doesn't quite work. It seems a little spooky, and we can't quite get into the story as we might because of it. When watching the demo video for 'Dawn of War', does anyone else find it odd that the orcs seem more real than the humans?

And yet, when we look at Monsters Inc, the lip syncing of Sully with John Goodman is just as imperfect. The hair is very impressive but also seems a bit floaty. Crucially, it doesn't really bother us. We can get on and laugh and enjoy the film immensely. The same is true of The Incredibles. Whereas Final Fantasy the movie just seems very odd for reasons that we can't quite place.

Frankenstein

An animator friend of mine told me that the reason for this weirdness has nothing to do with animation skills or technology problems. He calls it the 'Frankenstein Effect'. It's a psychological problem.

Our brains devote a lot of effort to visual processing, and of that a significant proportion is given over specifically to recognising human faces. We instinctively understand faces as patterns pretty much from birth, and we associate feelings with them. Our recognition routines are very sophisticated and are what allow us to read subtle expressions, recognise people from afar, and so on. And we are able to recognise when something's not right about a face. That's how we know when people are lying to us. Wrong faces make us instinctively suspicious.

My friend's explanation is that because we are looking at realistic characters on screen, our minds instinctively move into 'human face recognition' mode, especially for close-ups etc. And because we have a lot of our mind devoted to that specific task, we are hyper aware of imperfections. The more realistic the character becomes, the more we become aware of the imperfections.

That's why The Incredibles doesn't bother us. The characters are exaggerated and their expressions are suggestive rather than simulacra. Pretty much every successful animated character works on the basis of an exaggerated aspect of humans. Things like huge eyes, spindly limbs and very wonky teeth produce exaggerated expressions, and these we emotionally respond to (Gollum, for example). Caricature artists tend to find that their customers recognise exaggerated versions of, say, politicians more readily than realistic versions of the same. It would seem that exaggeration and animation go hand in hand, something Disney figured out a long time ago. It gets around the Frankenstein effect. That's why the orcs look more real than the humans in that Dawn of War video.

We instinctively know the difference between art and reality. With a realistic character, our brains are all the time flagging that something is not right, and because we are in a suspicious mode, we aren't really able to emote with the character. It's hard to feel empathy for a liar.

So, the obvious question is, if you want to make game stories involving realistic people, why not do what the film industry does? Why not use actors? The usual first answer is 'transitions'.

Transitions

It was common when playing a game circa 1995-98 to have FMV (full motion video) sequences between gameplay sections of a game. You would be playing a shooting game, for example, and in between each level you'd have someone like Christopher Walken or Mark Hamill or some rent-a-day actor do their bit, followed by a long loading screen, after which you'd go back to your character, who didn't really look a lot like Walken or Hamill or rent-a-day. So it was thought that those sequences were disassociative.

This didn't really stop Square producing Final Fantasy VII with clear differences between the animated cut-scenes and the gameplay sections. Plenty of noticeable transitions, and yet no-one was bothered. So perhaps the problem with FMV was not the mechanic of disassociation, but the material. Back in the FMV days, most game stories were just plain awful. However, the idea that the whole FMV mechanic was bad took hold because of 'transitions'.

EA's The Two Towers shows that transitions were no longer that big an issue. The game used shots from the film of the same name, overlaid the action with a lower-quality animated sequence using the same moves, and then put the player in control. It worked very well and made for quite a fun and epic game which captured the spirit of the film. You watch Legolas fighting, you see the change into animated Legolas, and bang, you're in the action without having the time to think about loading screens.

So if transitions can now be done well and actors can be placed inside fully CG backgrounds (Sin City), then why not use actors? Why are Konami continuing to create more and more realistic Solid Snakes, at greater cost, with increasing Frankenstein problems, when they could hire a half-decent action star and film their sequences to much better effect? Control.

Control

The second counter-actors argument is the idea of control. That is, if you can fully control every aspect of the production of cut scenes, from the movement of the camera to the expressions on the characters' faces, then you can get exactly what you want. You get the exact emotional impact.

There are two problems with this approach. One is that the Frankenstein effect is still present, no matter how much control you have. The second is that this much control deprives material of spontaneity, and if you want an emotional connection, then spontaneity is vital. For example, both “The Phantom Menace” and “Sky Captain and the World of Tomorrow” are very unemotional films to watch, though the reason why is not immediately obvious.

The scripts in both films are not great by any means, but that never stopped many an 80s action comedy from being basically entertaining. These films should be more entertaining than they actually are, and yet they fall very flat. I'm not the first to suggest that the reason for this is the controlled way in which they were made. Both films are almost wholly made in blue-screen, with an actor mostly delivering lines to empty air. They were both made in very controlled circumstances, in other words, and they lack spontaneity.

Contrast this with Sin City, also shot largely against CG backgrounds but, crucially, with a lot of group acting scenes and some amount of sets. It was also directed in a very informal and specific way between Rodriguez and Miller, whereas George Lucas's creation was very much an “I want this. More intense. More intense. More intense” sort of direction. Sin City gave its actors enough room to breathe, and made for an excellent film as a result. Phantom Menace treated them like dolls, in much the same way as the hypothetical 'total control' argument for game cut scenes. Why would we expect any different results?

I think, secretly, we all know that we wouldn't, but there is a larger issue that this actor issue is masking, and it is to do with control over a whole project.

Hollywood

It's easy to live in denial and say that things like the Frankenstein effect are just temporary problems, to be overcome by technology. There is no evidence of that. Games remain on the cultural margins, and game stories are far less emotionally engaging now than they were in the LucasArts adventure game days. As the technology develops to make more realistic skin tones, eyes and lips, the situation is getting worse, not better.

A good example is the promo video for Fight Night (I think) for PS3 (I think) which I saw again recently at an EA party. There are a lot of shots of fighters getting hit slowly in the face, and a speaker talking about how this is making the emotional connection more visceral and real etc, when in fact it just looks weird.

Denial is covering the real reason: The deeply held fear, especially on the part of developers, of Hollywood. In short, nobody wants the games industry to turn into the movie industry. Nobody wants movie stars calling the shots.

This is just hugely irrational. Game developers are, apparently, quite a conservative lot. They fear change. Even the indie side of the industry is largely devoted to trying to re-create the past. Game developers really don't like the idea that a Brad Pitt or a Tom Cruise might come in and make all sorts of unreasonable demands, charge a giant fee and then ride off into the sunset.

The answer to this is very simple. If you're terrified of movie stars, don't use them. There are scads of TV actors, stage actors and so on out there who'd kill for a break in a big game and they'll do what you want for reasonable fees.

It's more a reflection of our lack of experience and our practicality issues as an industry that we'd think it better to spend 2 million making freakish movie sequences with not-quite humans than to just hire a few actors and facilities for a week and shoot what we need for 250K. It reflects our continuing problem with getting in touch with reality, a problem that is rife across the games industry. It also reflects a deep trust issue, also an industry staple.


The Cost

Cost, more than anything else, is the convincing reason to start using actors again. They are cheaper and far more 'real' than any CG Solid Snake is ever going to be. What studios need to do is get their heads around the fact that there are better and cheaper ways to do the same things, and that now that we no longer have transition problems, what we are doing with all of our money is essentially inefficient and unworkable.

Why go through the heartbreak and failure of trying to recreate reality for the game's story when you could simply hire a writer, a director, an editor, a blue-screen facility and some actors and just work professionally instead? And that, more than anything, is why we need the actors back. The current method of realistic game storytelling does not work because the principal element is suspiciously unreal.

It's time to let the past be the past and move on from these destructive denial issues that cost so much and pay off so little.

Tuesday, August 16, 2005

A Game By...

It is an accepted wisdom in the games industry that franchises are what matter. A franchise, in case you don't know, is a line of games which are held together by a common brand. Usually that brand is either a name or an identifying character, or both. Some great examples are games like the FIFA series, the various games that Nintendo have attached Mario to, the Championship Manager series and so on.

And of course, it works. When people in the industry talk about the need to control and own Intellectual Property (IP) and how the business is an IP-centric business, it's brands and franchises that they're talking about. Licenses are an extension of this, where a brand from another medium (usually a movie) is borrowed to fill the gap, create customer recognition and so on. Customer recognition is key to the whole franchise idea.

However, entertainment franchises are at their most effective in younger markets, most especially the teenage and early twenties market (the "immature market"). The immature are especially open to the kind of unreasoned love that franchises thrive on, possibly because young minds haven't developed critical faculties to the extent that a fully grown adult has (somewhere between the ages of 25 and 30).

As a result, the more widely broadcast a franchise is, the greater the chance of its success, because the core customer doesn't really know any better. They have disposable income and they are far more likely to herd around what they are told has quality rather than what they determine for themselves has quality. The franchise business therefore works best for larger companies rather than smaller ones, because they have deeper pockets.

Pop music is an excellent example of this. As is the wargame market and Games Workshop's near total domination of it in Britain. As is EA's strategy, and that of the comics industry. The strategy of these companies is pretty simple:

1. Create a core brand and branch products off it.
2. Promote the hell out of those products.
3. Don't worry about the long life of those products or the physical quality of them (as opposed to the perceived quality).
4. Realise that your customer will grow out of your product, so don't try to retain them more than a few key years.
5. Instead, focus on recycling the product for a new generation, because there is always a new generation.

The franchise entertainment business is built on the idea that there are always new customers and that it is far more important to capture new customers than to retain old ones. Old customers mature and move on. They always do. 98%* of Games Workshop customers don't play Warhammer after they turn 21. A lot of people grow out of eating fast food by their mid-twenties (or at least of relishing the prospect).

(*educated guess)

The result of the franchise business is that the products themselves stand still. FIFA has been the same game for years. Marvel Comics are never going to let Spiderman permanently die. The Big Mac is immortal. Pop groups may die, but the formula remains intact (and the songs recycle).

So what? You may regard it as evil or bad or whatever, but it's simply a reality. You sell to the young, you do it with recycling unchanging franchises. That's what works.


The problem lies with those of us who don't want to make games for the immature. If you're interested at all in evolving the medium of games, if you're interested in getting the hell away from an eternal cycle of sweaty teenagers and increasingly dreary E3 graphics-athons, tediously 'exciting' console releases and more 10/10 reviews than you can shake a stick at, what do you do? Hell, even if you're just interested in opening up a new market, what do you do?

I believe that the answer is authors.
Or, to put it another way, people.
Or, to put it another way, the phrase "a game by".

You see, while the immature market (and the small number of adult gamers who can live with them) responds to franchises, the mature market sees through them very quickly. As a quick example, how many cynical thirty-something gamers do you know? Mature adults value film directors, they value novelists and they value musicians. They are far more interested in who wrote, directed or composed a piece than in what its name is. Why?

The mature entertainment consumer is wise enough to know when they're seeing the same thing repeated. They have perspective, and look for genuinely new ideas. They start to search out who or what is behind the brand. When you get to a certain age, you start to look for signs of intelligence out there. You start to look for art and soul, and you come to realise that you find that by following the artist. It's more 'real'.

But who is the artist of a videogame? That is the industry's real problem, as it applies to addressing the mature market. (For the immature market it doesn't really matter).


It used to be the case ten or fifteen years ago, before money became seriously involved, that some development teams were distinctive. As a music band can be identified as having an artistic voice (Pink Floyd, for example), so too you could derive a real sense of individuality from a developer's output. You knew a Sensible game or a Bullfrog game or an id game when you saw it. However, unlike a band, the actual members of the developer usually remained anonymous, bar one or two figures. A band performs on stage. They can be seen. A game development team, on the other hand, is essentially an anonymous company.

This meant that development teams were vulnerable. They could be bought. They could be dismembered. Members of the teams could be fired without the public ever knowing, and figureheads could arise who may or may not have been worth such celebrity. As the game media found out, it was increasingly difficult to pinpoint what the celebrities did exactly, and so you got a profusion of ludicrous claims, outright lies and increasingly non-specific 'rockstar designers'.

The teams also grew. The bands went from being four guys with guitars and a drum to ten guys with guitars and a drum, to twenty guys with guitars, drums, violins and a sound board. Now it's a hundred guys with a variety of instruments, organised into departments, with contract arrangements and management committees. The scenario is well-understood, but the upshot of it is that greater numbers lead to a diffusion of distinctiveness, and everything becomes faceless. Modern developers are essentially mini-corporations. Rather than bands, they have become orchestras.

The real survivors in the industry are and continue to be personalities. In Britain alone you have Peter Molyneux, Archer Maclean, David Braben, Jeff Minter. In the US you have Will Wright, Warren Spector, John Carmack etc. In Japan you have Miyamoto, Kojima, Suzuki etc. What these people have managed to do is become individuals who are recognised. They are the ones who have been able, as a result, to pull teams together, to collect long-term fans, to give consistent interviews over a number of years and, essentially, to build public trust around them as, for want of a better word, authors. Whether they count as conductors in the modern world is another issue, but the key point here is that they have a durable presence, and that is worth far more to their respective companies than just an IP.

It's very easy to think that a game designer is the natural person to turn to as the 'artist' of the game, but I think that this is misleading unless that person is also the one who drives the project to completion. Often, game designers do no such thing. They do a lot of document writing, they do a lot of preparation and figuring-out, but often it stops at the documents. They are often also involved in testing and supervising implementation, but at an equal level with folks from other departments. Game designers, like scriptwriters, are often imagining and designing based on the briefs of others, and they usually do not have overall control. Many of them lack the personal skills necessary to lead a team anyway.

This is where the notion of a game director comes in. This is seen more in Japan than in the West at the moment, as western studios are as often as not wary of putting someone creative in charge of everything. I think this has much to do with a fear of egotism, a lack of respect for ideas, and an assumption that a director is essentially just going to be some bossy fellow who teaches everyone how to suck eggs. Not so.

A game director, like a film director or a stage director or a chief architect, directs people. It's cheap and easy to say "they look after the vision" like some monk in a cell, but in reality what this means is mastering an understanding of all the disciplines involved in a modern game, understanding how the game is going to be put together, and then figuring out what to get people to do and how to do it. It's an incredibly responsible and time-consuming position, and an absolutely vital one.

Someone who sits in their office all day long and holds the odd meeting with team leads to see how things are going is not a game director. Someone who is up and about with the team all the time, always watching what is being done, always keeping people focussed, directing teams as to what he wants to see in detail and providing feedback and real decisions: that is a game director. It's a very demanding job.

In film, direction is everything. You need a great script, but a script is just words on a page without someone to bring it to life. In games, you need a great design, but you need someone to steward it into a brilliant prototype, to bring that through pre-production and production, and to be involved every step of the way. Not a producer, a director.

The director is the artist of the videogame. We should have no shame in pushing this, as by doing so we push our creativity out there. The old argument goes that the game development process relies on so many people that it is unfair to single out one person as the creator, but this is both unrealistic and a straw man. You never hear who the editor of the Harry Potter books is, and you never find out who the studio producers of Sting's latest album are. You never really pay attention to the million-and-one people that go into making a Spielberg film happen. You don't know how many people make John Rocca's outfits.

Yet in all these media we have no problem identifying the creative force. In games, we do, but purely because of old habits which died hard. This used to be a medium for bands. It isn't any more. It's time to recognise that.


There is no issue with the continuation of the immature franchises. They are what keep the industry going. But if the medium is to broaden both commercially and creatively, it's going to take some vision on the part of a publisher or a developer to realise that the mature market is different. If you want to find a new business angle, then you have to innovate in business.

The comics industry moguls did not realise this, and as a result their franchises were the very thing that marginalised them. Games Workshop now exists largely as a marginalised hobby and company, where they turn a profit but have run out of avenues for true growth.

The same could very easily happen to video games unless a second front is opened. Keep the franchises coming, boys, but we need to be looking outward as well. Artists, unlike franchises, have the capacity for change. They have the capacity for real reinvention. Artists have the capacity to generate real penetration in a way that no franchise can. You could never see the Master Chief on Letterman, but you could see a game director there. A game director has the possibility of turning a customer for life. No franchise can ever hope to do that.


A Post By Tadhg Kelly

Sunday, July 24, 2005

The Player's Journey

In a few places around the internet, I've been reasonably vocal on the subject of The Hero's Journey, the idea of bringing myth into games, the idea of interactive storytelling through videogames, and so on. (Have a look at Scott Miller's blog and a couple of vociferous letters to Gamasutra if you want to know what I'm talking about). I thought I'd compile my general thoughts on why this sort of thing bugs me quite a lot.

My main beef with the whole 'interactive narrative' idea goes back far further than videogames, back as far as 1990 and the world of tabletop roleplaying games. There is a school of thought in rpg circles (most visibly espoused by the company White Wolf, publishers of Vampire etc) that roleplaying games are actually the great inheritors of the tradition of oral storytelling. The idea goes that the game elements of numbers and dice etc are basically facilitators, ground rules if you will, for a shared imaginative background, and that what is really going on in an rpg is a recapturing of the mythic experience. Rather than hearing the story of Beowulf, you are becoming Beowulf, and in playing that character with your friends against the conflict-laden dramatic world that the games-master has laid out, you have an interactive narrative.

In part, I agree with this, but the effect is wholly dependent on the players themselves. Poor players, or uninterested players, will quickly reduce such high ambition to mud. Some players like to put on voices and act the part, but others like to be themselves. It is from roleplaying games, most importantly, that the idea of player-character has emerged.

Fast forward a decade and a half to today.

There is a lot of talk and experimentation in the world of videogames with the same idea, essentially, as that from 15 years ago in storytelling roleplaying games. The basic gist of the idea goes that rather than having a GM moderating and crafting the dramatic world, the computer does that in a systemic way. The player plays either by himself or in groups (via the internet or whatever), and so 'interactive drama' is born. Push this forward another step and you get into the realm of understanding videogames as an extension of the narrative idea (all games are stories, or self-instantiating stories, or whatever) and thereby creating a sort of association with that same oral (and visual, i.e. film) tradition.

Frankly, I think this interpretation is simply dead wrong.

And the reason that I think it is wrong is not simply a "we don't have the technology yet" sense of wrongness. It's a fundamental misunderstanding of the difference in the relationship between player and videogame versus player and roleplaying game. It also completely shortchanges the importance of the GM in the process, while misrepresenting the idea of 'story' into the bargain. Hence, I think applying the Hero's Journey as a guiding principle of design is 100% wrong.


Player and Character

It is assumed in the narrative model of videogames that the relationship between the player and the character is the same as that in the roleplaying game. I believe that this is incorrect, and that in fact the relationship is quite the reverse.

In roleplaying games, players often take on the role of their character as best they can. Why?
1. They have usually created said character from the ground up.
2. The game is set in an imagined world that no-one can see, so such elements help to add vibrancy and depth where none would exist otherwise.
3. They are guided by the GM, who among other things may dole out experience rewards for more dramatic play, and on a more general level may encourage it through play-acting and voicing himself.

Within the limited scope of a player's acting ability and the tone of the game in general (some roleplaying games are just dungeon quests and numbers-hauls after all), the player becomes the character.

It is assumed in narrativist-thinking that players in videogames are doing the same thing. I don't believe that this is true. Why?

1. In a videogame, the capabilities are inherent and obvious, and we take on perceptions based on where our points of contact are.
2. In a videogame, we are not engaging our imaginations. We are able to see the world that the game provides, whether abstract or realistic, and we see it fully in real time.
3. In a videogame, our actions are direct and instinctive, requiring no interpretation or spin.

As such, what we have in videogames is a constant window on a situation (rather than the dramatic one provided by a GM, a film director, or an oral storyteller) and an unconscious relationship with capability through static control points (which can vary from a humanoid character to a set of falling blocks). When playing a videogame, we are using a character as an extension of ourselves, not a replacement. In videogames, the character becomes the player.

This is borne out through observation.
Attending many roleplaying game conventions in my youth, it was easy to see players pretending to be characters, with befitting voices and mannerisms. On the other hand, I have never seen anyone actually roleplaying in an MMOG. MMOG players enjoy playing the game, but they invariably play as themselves (watch the Leeroy video as an example). I have never heard anyone talking about an FPS where they say 'And then Master Chief did this'. They always talk about what they themselves did. Nobody roleplays the battlefield of Battlefield 2. They just get on with the shooting.

Does this mean that I think that 'story' is impossible in games? No.
Well, not exactly, but not quite either. Before I can dig further into that, I need to talk a bit about story, narrative and drama themselves, because I think the terms are much confused.


The Story with Story

If games are capable of so-called storytelling, then what is a story?

Well, we can start by differentiating a story from an account. An account is simply this:

"Today I woke up at 10:00 am because light was streaming in the window. I didn't really want to get up, but I did. I went downstairs and I met Simon and his two children. Simon told me that that they were getting ready for a barbecue. I went into the living room and watched a bit of TV for a few minutes, and then I went back into the kitchen and got some breakfast. I had Cranberry Wheats, they were quite tasty. Afterward I sat back down and watched some more television..."

And so on. An account is simply a retelling of things that happened. I may have made up that account, or it may be true; it doesn't matter. What matters is that it is simply straightforward. This happened, then this happened, then this happened, etc.

Story, on the other hand, is to take this account and make it dramatically interesting. How? With structure. A storyteller structures an account in order to give it resonance, to drop the boring bits, to create mystery and drama, to fictionalise parts of the account that need to be fictionalised, and to chop and change focus. A good storyteller telling the story of my day might choose (depending on the feelings he wants to evoke, the mysteries he wants to conjure and the pace that he wants to convey) to start from the middle and work back ("Tadhg sat typing at the desk in Simon's house, but Simon was nowhere to be seen"), or from the beginning but work forward interestingly ("Cranberries", thought Tadhg, "Always with the fuckin cranberries") etc.

The key factor to recognise here is that the storyteller structures the complete story. He places the beginning where it is so that the end will transpire in a certain way, with certain emotions. This is where the Hero's Journey is quite relevant, as one means of reflecting that. The beginning and the end and the middle and all the details in between are all part of a web of structure, and when the structure works, the story works. The hero has a quest that we can follow because the structure lends itself to that, and so it goes. My account from above can therefore turn into anything from a story about a man musing on whether he should have broken up with his girlfriend while eating breakfast, to a piece about the malaise of western civilisation as told through the archetypal boring Sunday afternoon.

And that's the narrative experience.
Story = structure.

This is where roleplaying games diverge from videogames, because roleplaying games have a dramatic arbiter in the shape of a GM, who can pull the story back on track and skip the crap, whereas videogames are entirely account-based. In a roleplaying game, the GM can choose on the fly to skip encounters, add them in, give characters lines and information, and basically mix it up. GMs dynamically arbitrate roleplaying games to keep a dramatic structure. It doesn't always work, but there is a knack to it, and when it works the result is a consensual story experience.

A videogame system is unable to do that in keeping with drama. It may vary the hit points that a player has, to make combat easier or harder (as Max Payne apparently does), but it is unable to judge, on the scale of boredom versus pace, whether to drop bits of a level, add new ones, or otherwise craft drama. Storytelling is an art and a craft, and a videogame is capable of neither.
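
To make that limitation concrete, here is a minimal sketch (in Python, with entirely hypothetical names and numbers, not any shipped game's actual tuning) of the kind of stat-based balancing a game system genuinely can do:

    # A hypothetical stat-based difficulty balancer: the only kind of
    # "drama management" a game system can really perform. Names and
    # thresholds are illustrative assumptions, not a real game's tuning.
    class DifficultyBalancer:
        def __init__(self):
            self.damage_scale = 1.0  # multiplier on damage dealt to the player

        def on_player_death(self):
            # The player is struggling: soften incoming damage a little.
            self.damage_scale = max(0.5, self.damage_scale - 0.1)

        def on_flawless_level(self):
            # The player is breezing through: toughen things up a little.
            self.damage_scale = min(1.5, self.damage_scale + 0.1)

        def incoming_damage(self, base_damage):
            return base_damage * self.damage_scale

Note what is absent: there is no input for 'the player is bored' and no operation for 'cut this encounter', because those are judgments of dramatic structure, not of numbers.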

Videogames are real-time experiences. They are not self-instantiating stories. They are self-instantiating accounts.



Open Windows

This does not mean, however, that the account of a videogame cannot be interesting. What it means is that we cannot rely on the tools of storytelling in their traditional (or meta-story) form for any answers, because the whole of storytelling theory is based on identification with heroes and on structured narrative. Videogames subvert both. What videogames do allow for, however, is the crafting of situations.

The typical single player videogame pattern runs as follows. A piece of information sets up a gameplay challenge for the player. The player plays through that challenge, and then the game progresses on to the next challenge after informing the player that he has completed the current one. Challenges become progressively more involved or difficult or both. This I choose to call The Player's Journey.
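
As a rough sketch of that pattern (in Python, with hypothetical names throughout), the whole thing reduces to a loop of framed situations:

    # A minimal sketch of the setup/challenge/progress loop described
    # above. All names here are hypothetical illustrations.
    from dataclasses import dataclass

    @dataclass
    class Challenge:
        setup_info: str   # "Now try Level 3", an opening cut scene, etc.
        difficulty: int   # challenges escalate as the game goes on

    def play_until_complete(challenge):
        pass  # stands in for the player's own, unscripted play

    def players_journey(challenges):
        for challenge in sorted(challenges, key=lambda c: c.difficulty):
            print(challenge.setup_info)     # the game frames the situation
            play_until_complete(challenge)  # the player plays it their way
            print("Challenge complete.")    # the game acknowledges progress

    players_journey([
        Challenge("Now try Level 1", difficulty=1),
        Challenge("Now try Level 2", difficulty=2),
    ])

The loop frames situations and acknowledges completion; everything inside play_until_complete belongs to the player, which is the point.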

In this context, what happens to the world of the game?

The answer is that it changes. It may get faster, for example, or the landscape may change. The obstacles may change. The player may find themselves in a completely different level. The player's team may pick up injuries or switch sides on the pitch. The videogame represents a constantly changing and challenging situation.

It is impossible to build emotional connection into a game by establishing the hero's role and so on, because the player does not become the hero. Every player plays Max Payne according to his or her own personality (the character becomes the player, remember), so trying to make the player 'feel' a certain way on a constant basis is pointless. What the game can do, however, is set up the situation.

In doing this, the pieces of information that the game uses to set up the challenges are what's important. This information can be as simple as 'Now try Level 3', or it can be as complicated as the opening cut scene to Max Payne 2. What mustn't be forgotten is that the player will play according to the player's whims (my Max is different to yours), which runs counter to dramatic structure. But what doesn't change is the situation.

I play Max Payne one way, you play Max Payne another, but the precinct lieutenant is the same in both of our experiences. By crafting the situation appropriately, we can create an experience within the videogame that informs and underscores the gameplay (if we so choose), by realising that we're not creating a hero's journey; we're creating a player's journey. The player has a window onto a world. What we can do is create the world, and that world is what makes a game potentially artistic, authorial and interesting.



And that's where I think attempts to co-opt the Hero's Journey are doomed to failure: the Hero's Journey is based entirely on the structure and art that a storyteller provides. With videogames, what we are creating is a dynamic structure, what I like to call a Living Architecture. We can create interesting life within it, and depict situations, beliefs and morals if we choose, but we can only rely on the player to be the player, not some idealised hero figure that fits into our narrative scope. Videogames are fundamentally not a hero-driven narrative art; they are a player-driven situational one.

Let's use that to our advantage, not fight against it.
