Modeling Perception in a Multiplayer Text Game
I’ve been working on the Perception system for the FærieMUD project today, trying to come up with a basic set of functionality that is adequate for the “just get something working” mode that is my current goal for the game, but that allows room for the full-blown real thing when I (or someone else) get time to implement it.
In order to do that, I need to specify how the full thing will look, at least in the abstract, so I don’t shoot myself in the foot now. A bit of refactoring to allow for new functionality in the future is okay, but my time spent on the project now is so limited that every stitch of code represents a precious amount of time, and I’m loath to squander it. Of course, I also have to avoid my old pattern of not coding at all because I get lost in design…
So here’s what I’ve come up with so far:
Sketch of Specification
There are two modes of perception: active and passive. Passive perception occurs when activity or change happens around your character, and perception events are propagated to him via his environment. Active perception happens as the result of volition, e.g., you type look at mushroom and a Verb fires off an event.
Currently I’m thinking that the only difference between the implementations of active and passive perception will be in how the events get generated, so I can factor that bit out and just assume that the input to observation is a collection of PerceptionEvents.
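To make that concrete, here’s roughly the shape I have in mind for the observation entry point. This is only a sketch: the PerceptionEvent attributes and the stage methods are placeholder names, not anything that exists in the codebase yet.

```ruby
# Placeholder value object for a single perceivable occurrence; the
# attribute names are guesses, not settled API.
PerceptionEvent = Struct.new( :actor, :action, :target, :sense, :intensity )

class PerceptionObject
  def initialize( character )
    @character = character
  end

  # By this point it doesn't matter whether the events were generated
  # passively by the environment or actively by a Verb: observation just
  # takes the collection and runs it through the stages.
  def observe( events )
    events = vet( events )
    groups = contextualize( events )
    groups = distort( groups )
    render( groups )
  end

  # Stage stubs -- each one is a simple pass-through until the real
  # filtering, grouping, distortion, and text generation exist.
  def vet( events )           ; events   ; end
  def contextualize( events ) ; [events] ; end
  def distort( groups )       ; groups   ; end
  def render( groups )        ; groups   ; end
end
```

The stubs are there only to pin down the shape of the pipeline, so that both the passive and the active side can hand their events to the same observe call.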
When events enter the PerceptionObject, they’ll go through a few stages that will tailor the description to the character before it gets transformed to text:
Vetting - events will be discarded based on the character’s state. Things that can cause events to be discarded:
Disability - If the character is blind, visual events won’t be described. Or perhaps they will be if the character has been blind all his life…
Disinterest - Events that don’t match the character’s interests will be dropped.
Level of Detail - The incoming events will be affected by the volume of things happening. Events will be more observable when there are only a few as opposed to many.
Concentration - Observation is affected by mental fatigue, so a decrease in a character’s Concentration will decrease the number of events as well as the detail at which they are rendered.
Others - This list should be dynamically configurable, so perhaps something like the Bucket Brigade pattern will be appropriate; there’s a rough sketch of how such a chain might look after this list.
Contextualization - Events should be organized into groups of contextually related objects so that they appear together. They should also be expressed in terms of the object which the character has in focus.
Distortion - The character’s perception of events should be tailored to her own view of the world. Several factors can distort or modify the events she perceives:
Culture - The culture the character is from will instill in her a set of prejudices, assumptions, metaphors, and other such devices that reflect its makeup and history.
Relationships - The relationships the character has with others, with her culture, and with the objects or beings involved in the action can alter her perception as well.
Stories - The stories the character has gathered, either through her own experiences or by hearing them told by others, can color how she sees events. A grail-obsessed knight will see grail imagery everywhere, whereas a character who has been attacked repeatedly by dwarves will tend to see them as aggressors, no matter their true intentions.
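For the vetting stage in particular, the Bucket Brigade idea mentioned above might come out looking something like this. Again, just a sketch: the filter classes, and the blind? and concentration methods on the character, are invented for illustration.

```ruby
# Each vetting criterion is a small filter object with a common interface,
# so criteria can be added, removed, or reordered at runtime.
class DisabilityFilter
  # Blind characters get no visual events (modulo the born-blind question).
  def call( character, events )
    events.reject {|ev| ev.sense == :sight && character.blind? }
  end
end

class ConcentrationFilter
  # Mental fatigue raises the intensity an event needs in order to be noticed.
  def call( character, events )
    threshold = 1.0 - character.concentration
    events.select {|ev| ev.intensity >= threshold }
  end
end

class VettingChain
  def initialize( *filters )
    @filters = filters
  end

  # Pass the (shrinking) bucket of events from one filter to the next.
  def vet( character, events )
    @filters.inject( events ) {|evs, filter| filter.call(character, evs) }
  end
end

# chain = VettingChain.new( DisabilityFilter.new, ConcentrationFilter.new )
# noticed = chain.vet( character, incoming_events )
```

Keeping the chain as a plain list of callables is what makes the vetting criteria dynamically configurable: adding a new criterion is just a matter of appending another filter.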
Once the events are filtered through this matrix, they need to be described in text. This will require generating descriptive sentences in a narrative format that reflect the incoming events in terms of the object graph. While this in itself is a somewhat monumental task, we can at first just output simple descriptions and elaborate as we discover or invent appropriate techniques for our natural-language generation (NLG) systems. We already have a fairly good underpinning of technology for basic narrative generation, and I’ve gathered a large body of NLG research papers and examples that should help with the elaboration when there’s time.
For now, we’ll use something like the following (there’s a rough code sketch after the list):
Coalesce events into subject-predicate tuples.
Group tuples by connected or nearby objects.
Turn groups into paragraphs, and tuples into sentences.
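A first, very naive pass at those three steps might look like the following. The SubjectPredicate tuple and the grouping key are stand-ins, and it leans on the placeholder PerceptionEvent attributes from the earlier sketch; there’s nothing clever about grammar here yet.

```ruby
# Stand-in tuple type: who did what, to what.
SubjectPredicate = Struct.new( :subject, :predicate, :object )

# 1. Coalesce raw events into subject-predicate tuples.
def coalesce( events )
  events.map {|ev| SubjectPredicate.new( ev.actor, ev.action, ev.target ) }
end

# 2. Group tuples by the object they involve -- a crude stand-in for
#    "connected or nearby" until there's real contextual grouping.
def group( tuples )
  tuples.group_by {|tup| tup.object || tup.subject }.values
end

# 3. Groups become paragraphs, and tuples become (very plain) sentences.
def render( groups )
  groups.map do |tuples|
    tuples.map do |tup|
      words = [ tup.subject, "#{tup.predicate}s", tup.object ].compact
      words.join( ' ' ) + '.'
    end.join( ' ' )
  end.join( "\n\n" )
end

# render( group( coalesce(events) ) ) would yield something like:
#   "the dwarf opens the chest. a goblin inspects the chest."
```

Crude as it is, that’s enough to get readable output flowing, and each of the three methods is an obvious seam for swapping in better NLG techniques later.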
That’s quite a lot of work, but I think it’ll go a long way towards making things “playable” (for a sufficiently lenient definition of the word).