Reading: Wizard of Oz Support throughout an Iterative Design Process

One of the difficulties of Wizard of Oz prototyping is that each scenario needs to be carefully designed, and the wizard’s responses carefully orchestrated. The time spent doing this makes it hard to use Wizard of Oz prototyping iteratively; each iteration is just too expensive.

The reading is about a system that attempts to reduce these costs. The Georgia Institute of Technology developed DART, a system that provides a set of tools for testing augmented reality applications. It’s a publish/subscribe system that relies on reacting to events. For Wizard of Oz prototyping, the system can publish events such as user proximity or situation and forward them to the wizard, who can respond appropriately. Responses are tied to specific events, reducing the time needed to search for an appropriate response to an event.
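
To make the event-to-response idea concrete for myself, here’s a minimal sketch of how such a publish/subscribe wizard console might work. This is my own illustration in Python, not DART’s actual API; the names (Event, WizardConsole, the "user.proximity" topic) are all made up.

```python
# Minimal sketch of a publish/subscribe loop for Wizard of Oz testing.
# All names here are hypothetical illustrations, not DART's real API.
from dataclasses import dataclass


@dataclass
class Event:
    topic: str     # e.g. "user.proximity"
    payload: dict  # e.g. {"location": "grave_site_3", "distance_m": 4}


class WizardConsole:
    """Forwards published events to the wizard and keeps canned responses
    tied to each event topic, so the wizard picks instead of improvising."""

    def __init__(self):
        self._responses = {}  # topic -> list of prepared responses

    def subscribe(self, topic, canned_responses):
        self._responses[topic] = canned_responses

    def publish(self, event):
        options = self._responses.get(event.topic, [])
        print(f"[wizard] {event.topic}: {event.payload}")
        for i, text in enumerate(options):
            print(f"  ({i}) {text}")
        choice = int(input("response #: "))  # the human wizard chooses
        return options[choice]


# Usage: tie responses to a proximity event before the test session.
console = WizardConsole()
console.subscribe("user.proximity", ["Play intro audio", "Show headstone overlay"])
reply = console.publish(Event("user.proximity", {"location": "grave_site_3"}))
```

The point is simply that the wizard’s choices are pre-bound to event topics, which is what cuts down the search for an appropriate response during a session.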

The reading includes a case study that uses the DART system to guide users on a tour of Oakland Cemetery in Atlanta. As users visit certain locations, the system provides information about what they are seeing, and if they want more, it lets them delve deeper. Of course, the actual system isn’t built for these tests; instead, the ‘computer’ responses are given by the wizard. For the first test, the wizard controlled the entire system. For the second and third tests, the wizard controlled fewer and fewer elements of the system.

Homework: Storyboard

Context

This set of storyboards tells the story of how the Doctor uses the homing feature in his watch to find certain prearranged objects, like the TARDIS. I focused on the 2D display that appears on the inside of the watch lid. The 2D display is mainly used while the Doctor is moving; the 3D display is more informative but difficult to use while walking around. A quick glance at the lid of the watch shows the Doctor whether or not he’s heading in the right direction.

Version 1 to Version 2

Version 1 was done using just paper and pencil. Version 2 I did primarily in Blender, assembling the whole storyboard in PowerPoint. The paper sketches were very unsatisfying to me, mainly because of my terrible handwriting and poor drawing.

One problem I had with Version 1 is that there was no context for the story. Why was the Doctor unconscious? Where was he? What were the connections between frames? Version 2 has twice as many frames, and tries to fill in a back story and conclusion without getting too mired in all the details.

Reflection on Storyboarding

Initially I started Version 1 using StoryboardThat, but while its poses are adequate, I found the characters too limited for the Doctor Who story. The biggest problem was having to make custom graphics and then upload them. If I have to make the graphics anyway, why not just use a much more powerful graphics tool and skip the extra step?

Version 1 ended up being a simple pencil drawing. Despite the low expectations for artistic quality, I still have a mental baseline of what’s acceptable. It lies somewhere between drawings that are intelligible and drawings that, despite being rough, are stylish. Unfortunately, my own work didn’t seem to meet even the intelligibility requirement. My handwriting, in particular, is terrible. Fortunately, computers and their clear, stylish, and consistent text have allowed me to survive in this world.

Blender was still rather time-consuming. Fortunately, there are several websites dedicated to Doctor Who themed meshes with free downloads. I supplemented these with more general-purpose meshes from other websites. These free assets that I could simply copy and move around in various scenes made this whole approach feasible. Without them, far too much time would have been spent simply modeling characters. I only ended up making a mesh for the watch itself, which was not too difficult.

For future development, I think more interaction with the watch would be helpful, especially if I’m trying to work out what the watch does, and how it’s used.

Other Use

I think the sequential type of storyboarding, especially using a GUI, could be useful at work. It seems almost like an extension of a wireframe or other paper prototype, but showing various states in sequence. It adds a time dimension to the whole prototype.

I’m wondering if somehow having the prototype in digital form, rather than paper form would help speed up the process. If a frame can simply be mostly copied over to a new frame with slight modifications, the process could be quite fast. On the other hand, we do have photocopiers at work.

Inspiration: Designing for Pleasure

For this week’s inspiration, I read the “Designing for Pleasure” chapter of Alan Cooper’s book The Inmates Are Running the Asylum. The section on personas was the most interesting to me. I kind of knew what personas were before this reading, but didn’t quite see the point of them. It turns out that they prevent the ‘elastic user’ phenomenon.

When designing software without personas, it’s very easy to say that the users want something different every day. One day it’s the power user who needs some fancy feature; another day it’s the non-proficient user who needs a simplified interface. Personas force the design team to focus on one primary user, and possibly one or two secondary users. When a feature is suggested, it can be compared against the goals of the primary user.

Reading: Storyboard Images

This week’s reading focused on how to save time and effort by cutting and pasting tracings of existing or new images. Traced images can be especially helpful for those of us who need to produce decent looking storyboards, but aren’t very good at drawing.

The first technique the reading talks about is taking an existing image – in this example, a cell phone – and tracing it on a computer using a drawing tablet or mouse. The newly created layer can then serve as the background of all storyboard images that contain a view of the phone. Each ‘screen’ of the interface can be drawn on a different layer and selectively made visible as needed for storyboard frames. Alternatively, plain old tracing paper can be used to make tracings of the image, which can then be photocopied.

Similarly, existing web pages may be used by taking a screenshot of the page, and then whiting out the existing content, replacing it with new content on the computer or on paper.

Taking the tracing technique further, gestures can be indicated by taking photos of your hands in the desired gesture positions, and tracing the photos. These gestures can be copied and pasted as needed. Generic grasping and manipulating gestures can be combined with other traced objects to create entirely new scenes.

If tracings are ultimately drawn in black, light-gray arrows can be used to indicate movement and manipulation.

Sometimes, an existing image of the scene being depicted can be clarified by highlighting parts of the image that need to be emphasized. For example, a person or a person holding an object to be manipulated can be traced and filled in with white to emphasize their actions.

Furthermore, existing images may be annotated as if they were augmented reality scenes.

I was impressed with the ability to create clear storyboard images of various scenes without the need to draw everything freehand. Hands and faces seem to be the hardest things for me to draw, and I can definitely see how tracing reference images could help.

Homework 7: Mobile-Focused Tool

Context

For this round of prototyping I chose to focus on the look and feel of the application. For past iterations, I had been bothered by the inability to animate the interface in the way it would appear to the Doctor. Everything was very static, despite the ability to navigate through the screens. I tried to give all the interface elements a shiny gold metallic look that evokes decorative elements of a gold watch.

The homing interface focuses mainly on showing a path to prearranged objects that the Doctor has had prior access to. Each item has its own button that, when touched, shows a path to the desired person or thing. The map starts as a bird’s-eye view; when a button is touched, the viewpoint swoops down to ground level and points at the start of the path. Each type of path has its own color to indicate which object it leads to.

The alarm interface has three different settings: time, location, and a message to indicate what the alarm is for. The time screen has sliding rows of time units; the selected number or name sits in the somewhat isolated center of the row. The location screen shows a map interface that allows the Doctor to choose a location for the alarm. The message screen offers the ability to either type a message by touching individual letter buttons, or to touch a microphone icon and speak the message. As words are recognized, the microphone icon pulses a bit.

Version 1 to Version 2

Since I was never a professional animator, and what Blender skills I had were rusty, I focused on just getting basic meshes and animations working in Blender.

The alarm screens for Version 1 have the moving time rows and the typing effects; the map is simply static. In Version 2, I discovered that Marvel allows different gestures on its hotspots. I set the hotspots to use pinch and spread gestures to simulate zooming in and out of maps. Each image was created in Blender to animate the zooms.

The homing screen in Version 1 was just an image borrowed from old prototypes, imported into Blender and given a very basic animation. Mainly, I was just seeing if I could solve the problem of animating the rotation of the map. It turns out I should have animated the camera instead of the map plane. For Version 2, I figured out how to tie the buttons to the camera in the scene graph, so the buttons are now present. When the TARDIS button is touched, the path appears and the camera flies in behind the Doctor, facing the start of the path.
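
In case it helps to see what the camera-based approach amounts to: it’s really just keyframing the camera’s location and rotation, and parenting the button objects to the camera. Below is a rough sketch of that idea using Blender’s Python API (bpy); the object names, positions, and frame numbers are placeholders, not my actual scene setup.

```python
# Rough sketch of the map "swoop" done by animating the camera rather than
# the map plane, with a UI button parented to the camera. Object names,
# positions, and frame numbers are placeholders.
import bpy
from math import radians

cam = bpy.data.objects["Camera"]
button = bpy.data.objects["TARDIS_Button"]

# Parent the button to the camera so it stays fixed on screen during the move.
button.parent = cam

# Frame 1: bird's-eye view, looking straight down at the map.
cam.location = (0.0, 0.0, 20.0)
cam.rotation_euler = (0.0, 0.0, 0.0)
cam.keyframe_insert(data_path="location", frame=1)
cam.keyframe_insert(data_path="rotation_euler", frame=1)

# Frame 60: ground level behind the Doctor, facing the start of the path.
cam.location = (0.0, -6.0, 1.7)
cam.rotation_euler = (radians(85), 0.0, 0.0)
cam.keyframe_insert(data_path="location", frame=60)
cam.keyframe_insert(data_path="rotation_euler", frame=60)
```

Blender interpolates between the two keyframes, which gives the fly-in effect without ever touching the map geometry.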

Reflection on Using Marvel and Blender

I had originally started this project using InVision, but it only allows images up to 10MB in size to be uploaded. My animations are animated GIFs made from very short movies produced in Blender, and some of them are several times larger than that. Marvel had no problem with the large GIFs at all.
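
For what it’s worth, the movie-to-GIF step doesn’t have to be fancy. One way to do it is to render the animation as individual frames and stitch them together with Pillow; the paths and frame rate below are placeholders, not the settings I actually used.

```python
# One possible way to turn a folder of rendered PNG frames into an animated
# GIF using Pillow. The folder name, output name, and frame rate are
# placeholders for illustration.
from pathlib import Path
from PIL import Image

frames = [Image.open(p) for p in sorted(Path("render_out").glob("*.png"))]
frames[0].save(
    "homing_swoop.gif",
    save_all=True,
    append_images=frames[1:],
    duration=1000 // 24,  # ~24 fps, expressed in milliseconds per frame
    loop=0,               # loop forever
)
```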

I really love working with Blender, and it’s about as flexible an animation tool as one can find. For situations where very specific behaviors need to be modeled, like the ones I had in this project, it did the job very well. Another benefit of using Blender is that premade meshes of various useful objects are easy to find, so you don’t have to go to the trouble of making them yourself. For example, by searching for “Doctor Who Blender”, I was able to find a rather nice mesh of the Doctor to use in the prototypes. The one problem with Blender is that it’s a tool made for detail, not speed. I sank a lot of time into looking up how to create animations and tweaking the prototypes. Having gone through this process, I would hesitate to use Blender for anything but the most specific scenes that couldn’t be communicated any other way.

Other Situations and Projects

One use I could see for Blender is creating prototypes of 3D spaces composed mostly of easily found meshes. I found slides from a talk on this subject in which the speaker showed how to use Blender for room layouts. These rooms could be scripted to give a walkthrough of an office, or serve as a prototype for a video game.

Of the image-based prototyping tools, I have so far liked Marvel the best, and even paid for a subscription for a few months. It’s flexible, offers touch gestures, and doesn’t complain about my ridiculously huge images. Another cool feature is that there are Android and iPhone apps that let you test out your prototypes right on the phone. InVision had a neat feature where you could text a link to a phone, but it choked on my prototypes. The Marvel app seems to be more reliable.

Reading: Storyboards

Sequential Storyboards

These are a series of images that communicate a sequence of events in a situation. Much like written stories, the images have a setup, a set of related events, a climax, and a resolution.

Each frame in the storyboard acts like a keyframe in an animated movie, but the frames in between are left out since they represent user activity. How many keyframes to include in the storyboard depends on how much detail the viewer can figure out on their own. A long form doesn’t need a frame for filling out every field, but if certain settings aren’t obvious, it’s good to break them out into separate images.

Annotations

Actions depicted in the images can and should be annotated to explain what is going on in more detail. Short notes like “user enters their credit card number and expiration date and clicks submit” can clarify what is going on in the image if it’s not completely obvious.

Images can be enhanced with additional visual information. Arrows can indicate something like the swiveling of a person’s head when they notice something, or motion blur can be used to indicate fast movement.

Inspiration: Marvel

Marvel, much like InVision, is an online prototyping tool designed to be used by small teams. Team members may upload images, made in third-party software, representing screens in the application. Screens can be given hotspots that represent navigation or different screen states. Marvel also has a “Canvas” that provides basic drawing capabilities. The drawing system is powerful enough to draw blocks of color and place text. Multiple images can be loaded onto the canvas to create composites.

Users with a Marvel account can view the prototypes online, or use the Marvel App on either iPhone or Android phones for a more realistic mobile experience.

I had started using InVision for the first round of prototypes for the Medium-Fidelity Mobile project, but ran into problems when I tried to upload images larger than 10MB. I managed to shrink the images enough to upload them, and it seemed to work, but the smartphone version didn’t work at all. A few days later, the images wouldn’t appear even on the desktop. Marvel, on the other hand, was not at all squeamish about large images and has seemed more reliable. The mobile app for Marvel doesn’t seem to accept my GIF animations, but I can live with that for now, as long as the desktop version animates them.
