Inspiration: Blender Sequencer

A few weeks ago I wrote about using Blender to make storyboard images, but I was only dimly aware of its video sequencer. I thought I’d try it out for video editing and see whether it was much more difficult and involved than using PowerPoint to make the final video for class.

I found a video series on YouTube that walks you through the process, starting with very basic editing and building up to more advanced techniques.

For the storyboards, you simply drag images and audio files into the sequence editor, where they appear as strips. Cross-fade effects can be made by selecting two overlapping strips and adding an effect strip from the Add menu. More advanced effects are possible, yet relatively easy to use. While PowerPoint made it quite easy to produce a video, it offers no fast, easy way to rearrange the images and audio without re-recording the slide show.
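
The same strip operations can also be scripted through Blender’s Python API, which is handy if a storyboard edit ever needs to be regenerated. Here is a minimal sketch, assuming two still-image frames and a narration track; the file paths, channels, and frame numbers are placeholders, not anything from the actual project.

```python
import bpy

scene = bpy.context.scene
if scene.sequence_editor is None:
    scene.sequence_editor_create()
strips = scene.sequence_editor.sequences

# Two overlapping image strips on separate channels
# (paths and frame ranges are placeholders).
a = strips.new_image(name="frame_01", filepath="//frames/01.png",
                     channel=1, frame_start=1)
a.frame_final_duration = 120
b = strips.new_image(name="frame_02", filepath="//frames/02.png",
                     channel=2, frame_start=101)
b.frame_final_duration = 120

# Narration audio under the images.
strips.new_sound(name="narration", filepath="//audio/take1.wav",
                 channel=3, frame_start=1)

# Cross-fade across the 20-frame overlap: the scripted equivalent of
# selecting both strips and using Add > Effect Strip > Cross.
strips.new_effect(name="xfade", type='CROSS', channel=4,
                  frame_start=101, frame_end=121, seq1=a, seq2=b)
```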


Inspiration: d.tools

d.tools is a system designed to build, test, and analyze interactive physical prototypes. The original system, developed in 2006, consisted of a desktop design application and a set of commonly used circuit boards, sensors, and a small LCD screen. Later in the system’s development, support was added for other common hardware prototyping platforms, such as Arduino.

In the design phase, the software works closely with the circuit boards to define interactions based on button presses, sensor readings, and touch-screen input. Plugging a sensor into the main circuit board makes it appear as a tool in the desktop application. Live data feeds from a sensor can be used to easily set threshold values that trigger actions based on sensor input. The system is designed to require as little hand-written code as possible, but offers the ability to add Java classes for custom interactions.
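
d.tools itself is a desktop application extended with Java classes, so the paper shows no code for this; the Python sketch below only illustrates the threshold idea, with a hypothetical read_sensor function standing in for the live data feed.

```python
import time

def watch_threshold(read_sensor, threshold, on_trigger):
    """Fire on_trigger on each rising edge of a live sensor feed.

    read_sensor and threshold are hypothetical stand-ins for the live
    feed and the value a designer would set from the visualization.
    """
    was_above = False
    while True:
        value = read_sensor()
        is_above = value >= threshold
        if is_above and not was_above:   # trigger once per crossing
            on_trigger(value)
        was_above = is_above
        time.sleep(0.01)                 # poll at roughly 100 Hz
```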

In the test phase, the desktop application records video and audio through an external camera, and automatically logs the use of each interaction on a timeline.

In the analysis phase, each state transition is paired with its video segment, so designers can quickly pull up the clips from a test that correspond to the interactions driving them. State transition charts are generated automatically from the tests, and designers can see which states were reached and how the user arrived at each one.
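
The pairing itself is easy to picture: if every transition is logged with a timestamp relative to the session recording, the matching clip is just an offset into the video. A minimal sketch of that idea follows; the log format and the two-second padding are my assumptions, not d.tools internals.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    time: float   # seconds since the recording started
    src: str      # state before the transition
    dst: str      # state after the transition

def clip_for(t: Transition, pad: float = 2.0) -> tuple[float, float]:
    """Return (start, end) seconds of video surrounding one transition."""
    return (max(0.0, t.time - pad), t.time + pad)

log = [Transition(12.4, "idle", "menu"), Transition(19.1, "menu", "map")]
for t in log:
    print(f"{t.src} -> {t.dst}: clip {clip_for(t)}")
```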

State diagrams generated by a d.tools prototype.

Analyzing multiple interactions at once.

Video: http://hci.stanford.edu/publications/2006/dtools/dtools-uist06.mov

Paper: http://hci.stanford.edu/publications/2006/dtools/dtools-uist06.pdf


Reading: Sketching with Foam Core

This week’s reading detailed methods of making foam core prototypes of mobile devices, specifically smartwatches and phones. These devices tend to have many screens, each with multiple functions.

Foam core can be layered up to the thickness of the device. The ‘screen’ can be cut out of the top layer, and individual hand-drawn or digitally drawn screens can be slipped in between the layers. Test users can interact with the screens, and as choices are made, new screens can be slipped in.

One big advantage of this technique when paired with hand drawn screens is the ability to collaboratively make up new screens on the fly. Simply draw another screen.

Another advantage of this prototyping method is that the screens are drawn to scale. This gives testers a very good sense of how users would see and interact with the screens.


Reading: Wizard of Oz Support throughout an Iterative Design Process

One of the difficulties of Wizard of Oz prototyping is that each scenario needs to be carefully designed and the wizard’s responses carefully orchestrated. The time spent doing this prevents using Wizard of Oz prototyping iteratively; each iteration is simply too expensive.

The reading is about a system that attempts to reduce these costs. The Georgia Institute of Technology developed DART, a system that provides a set of tools for testing augmented reality applications. It’s a publish/subscribe system that relies on reacting to events. For Wizard of Oz prototyping, the system can publish events such as user proximity or situation and forward them to the wizard, who can respond appropriately. Responses are tied to specific events, reducing the time needed to search for an appropriate response when an event occurs.
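
DART is its own toolkit, so the snippet below is not its API; it is just a minimal Python sketch of the publish/subscribe pattern the reading describes, where the wizard’s canned responses are bound to named events ahead of time. All event and handler names here are invented.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus (illustrative, not DART's API)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event, handler):
        self._subscribers[event].append(handler)

    def publish(self, event, **data):
        for handler in self._subscribers[event]:
            handler(**data)

bus = EventBus()
# The wizard binds a response before the test, so no searching is
# needed when the event fires mid-session.
bus.subscribe("user_near_site", lambda site: print(f"Play narration for {site}"))
bus.publish("user_near_site", site="plot 47")
```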

The reading includes a case study that uses the DART system to guide users on a tour of Oakland Cemetery in Atlanta. As users visit certain locations, information about what they are seeing is provided, and if they want more, the system lets them delve deeper. Of course, the actual system isn’t built for these tests; instead, the ‘computer’ responses are given by the wizard. For the first test, the wizard controlled the entire system. For the second and third tests, the wizard controlled fewer and fewer elements of the system.


Homework: Storyboard

Context

This set of storyboards told the story of how the Doctor uses the homing feature in his watch to find certain prearranged objects, like the TARDIS. I focused on the 2D display shown on the inside of the watch lid. The 2D display is mainly used while the Doctor is moving, unlike the 3D display, which is more informative but difficult to use while walking around. A quick glance at the lid of the watch shows the Doctor whether or not he’s heading in the right direction.

Version 1 to Version 2

Version 1 was done using just paper and pencil. Version 2 I did primarily in Blender, assembling the whole storyboard in PowerPoint. My paper sketches were very unsatisfying to me, mainly because of terrible handwriting and bad drawing.

One problem with Version 1 was that there was no context for the story. Why was the Doctor unconscious? Where was he? What were the connections between frames? Version 2 has twice as many frames and tries to fill in a backstory and conclusion without getting too mired in the details.

Reflection on Storyboarding

I initially started Version 1 using StoryboardThat, but while its poses are adequate, I found the characters too limited for a Doctor Who story. The biggest problem was having to make custom graphics and then upload them. If I’ve got to make the graphics anyway, why not just use the much more powerful graphics tool and skip the extra step?

Version 1 ended up being a simple pencil drawing. Despite the low expectations for artistic quality, I still have a mental baseline of what’s acceptable. It lies somewhere between drawings that are intelligible and drawings that, despite being rough, are stylish. Unfortunately, my own work didn’t seem to meet even the intelligibility requirement. My handwriting, in particular, is terrible. Fortunately, computers, with their clear, stylish, and consistent text, have allowed me to survive in this world.

Blender was still rather time-consuming. Fortunately, there are several websites dedicated to Doctor Who themed meshes with free downloads. I supplemented these with more general-purpose meshes from other websites. These free assets that I could simply copy and move around in various scenes made this whole approach feasible. Without them, far too much time would have been spent simply modeling characters. I only ended up making a mesh for the watch itself, which was not too difficult.

For future development, I think more interaction with the watch would be helpful, especially if I’m trying to work out what the watch does, and how it’s used.

Other Use

I think the sequential type of storyboarding, especially using a GUI, could be useful at work. It seems almost like an extension of a wireframe or other paper prototype, but showing various states in sequence. It adds a time dimension to the whole prototype.

I’m wondering if somehow having the prototype in digital form, rather than paper form would help speed up the process. If a frame can simply be mostly copied over to a new frame with slight modifications, the process could be quite fast. On the other hand, we do have photocopiers at work.


Inspiration: Designing for Pleasure

For this week’s inspiration, I read the “Designing for Pleasure” chapter of Alan Cooper’s book The Inmates Are Running the Asylum. The section on personas was most interesting to me. I sort of knew what personas were before this reading, but didn’t quite see the point of them. It turns out that they prevent the ‘elastic user’ phenomenon.

When designing software without personas, it’s very easy to say that the users want something different every day. One day, it’s the power user who needs some fancy feature; another day, it’s the non-proficient user who needs a simplified interface. Personas force the design team to focus on one primary user, and possibly one or two secondary users. When a feature is suggested, it can be compared against the goals of the primary user.


Reading: Storyboard Images

This week’s reading focused on how to save time and effort by cutting and pasting tracings of existing or new images. Traced images can be especially helpful for those of us who need to produce decent looking storyboards, but aren’t very good at drawing.

The first technique the reading discusses is taking an existing image – in this example, a cell phone – and tracing it on a computer using a drawing tablet or mouse. The newly created layer can then serve as the background of every storyboard image that contains a view of the phone. Each ‘screen’ of the interface can be drawn in a different layer and selectively made visible as needed for storyboard frames. Alternatively, regular old tracing paper can be used to make tracings of the image, which are then photocopied.
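
The digital version of this layering is plain image compositing. Here is a minimal sketch using Pillow, assuming the traced phone is one RGBA background and each interface screen is a same-sized transparent PNG; all filenames are placeholders.

```python
from PIL import Image

# Traced device outline as the shared background.
background = Image.open("phone_tracing.png").convert("RGBA")

# One transparent overlay per interface screen; each must match the
# background's dimensions for alpha_composite to work.
for screen in ["home.png", "contacts.png", "call.png"]:
    overlay = Image.open(screen).convert("RGBA")
    frame = Image.alpha_composite(background, overlay)
    frame.save(f"storyboard_{screen}")
```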

Similarly, existing web pages may be used by taking a screenshot of the page, and then whiting out the existing content, replacing it with new content on the computer or on paper.

Taking the tracing technique further, gestures can be indicated by taking photos of your hands in the desired gesture positions, and tracing the photos. These gestures can be copied and pasted as needed. Generic grasping and manipulating gestures can be combined with other traced objects to create entirely new scenes.

If tracings are ultimately drawn in black, light-gray arrows can be used to indicate movement and manipulation.

Sometimes, an existing image of a scene can be clarified by highlighting the parts that need emphasis. For example, a person, or a person holding an object to be manipulated, can be traced and filled in with white to emphasize their actions.

Furthermore, existing images may be annotated as if they were augmented reality scenes.

I was impressed with the ability to create clear storyboard images of various scenes without the need to draw everything freehand. Hands and faces seem to be the hardest things for me to draw, and I can definitely see how tracing reference images could help.
