Homework 7: Mobile-Focused Tool

Context

For this round of prototyping I chose to focus on the look and feel of the application. In past iterations, I had been bothered by the inability to animate the interface the way it would appear to the Doctor. Everything was very static, despite the ability to navigate through the screens. I tried to give all the interface elements a shiny gold metallic look that evokes the decorative elements of a gold watch.

The homing interface focuses mainly on showing a path to prearranged objects that the Doctor has had prior access to. Each item has its own button that, when touched, shows a path to the desired person or thing. The view of the map starts as a bird's-eye view, and when a button is touched, the viewpoint swoops down to ground level and points at the start of the path. Each path type has its own color to indicate which object it leads to.

The alarm interface has three different settings: time, location, and a message to indicate what the alarm is for. The time screen has sliding rows of time units; the selected number or name sits in the somewhat isolated center of its row. The location screen shows a map interface that allows the Doctor to choose a location for the alarm. The message screen offers the ability either to type a message by touching individual letter buttons, or to touch a microphone icon and speak the message. As words are recognized, the microphone pulses a bit.

Version 1 to Version 2

Since I was never a professional animator, and the skills I had with Blender were rusty, I focused on just getting basic meshes and animations working in Blender.

The alarm screens in version 1 have moving time rows and typing effects; the map is simply static. In version 2, I discovered that Marvel allows different gestures on its hotspots, so I set the hotspots to use pinch and spread gestures to simulate zooming in and out of the map. Each image was created using Blender to animate the zooms.

The homing screen in version 1 was just a borrowed image from old prototypes, imported into Blender and given a very basic animation. Mainly, I was just seeing if I could solve the problem of animating the rotation of the map; it turns out I should have animated the camera instead of the map plane. For version 2, I figured out how to tie the buttons to the camera in the scene graph (sketched below), so buttons are now present. When the TARDIS button is touched, the path appears, and the camera flies in behind the Doctor and faces the start of the path.
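For anyone curious how the button-to-camera tie works, it's just parenting in Blender's scene graph. Below is a minimal sketch of the idea using Blender's Python API (bpy); the object names are hypothetical stand-ins for the ones in my scene, and I actually set the parent through Blender's interface rather than a script.

```python
import bpy

# Hypothetical object names; substitute the real ones from the scene.
cam = bpy.data.objects["Camera"]
btn = bpy.data.objects["TardisButton"]

# Parenting the button to the camera makes the button's transform
# relative to the camera, so it stays fixed in view while the camera
# swoops around the map.
btn.parent = cam
btn.location = (0.4, -0.35, -2.0)  # camera looks down its local -Z axis
```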

Reflection on Using Marvel and Blender

I had originally started this project using InVision, but it only allows images up to 10MB in size to be uploaded. My animations are animated GIFs made from very short movies produced in Blender, and some of them are several times larger than that. Marvel had no problem with the large GIFs at all.

I really love working with Blender, and it's about as flexible an animation tool as one can find. For situations where very specific behaviors need to be modeled, like the ones I had in this project, it did the job very well. Another benefit of using Blender is that premade meshes of various useful objects can easily be found, so you don't have to go to the trouble of making them yourself. For example, by searching for "Doctor Who Blender", I was able to find a rather nice mesh of the Doctor to use in the prototypes. The one problem with Blender is that it's a tool made for detail, not speed. I sank a lot of time into looking up how to create animations and into tweaking the prototypes. Having gone through this process, I would hesitate to use Blender for anything but the most specific scenes that couldn't be communicated any other way.

Other Situations and Projects

One use I could see for Blender is to create prototypes of 3D spaces mostly composed of easily found meshes. I found slides from a talk on this subject in which the speaker showed how to use Blender for room layouts. These rooms could be scripted to give a walkthrough of an office, or to serve as a prototype for a video game.

Of the image-based prototyping tools, I have so far liked Marvel the best, and even paid for a subscription for a few months. It’s flexible, offers touch gestures, and doesn’t complain about my ridiculously huge images. Another cool feature is that there are Android and iPhone apps that let you test out your prototypes right on the phone. InVision had a neat feature where you could text a link to a phone, but it choked on my prototypes. The Marvel app seems to be more reliable.

Reading: Storyboards

Sequential Storyboards

These are a series of images that communicate a sequence of events in a situation. Much like written stories, the images have a setup, a set of related events, a climax, and a resolution.

Each frame in the storyboard acts like a keyframe in an animated movie, but the frames in between are left out, since they represent user activity. How many keyframes to include in the storyboard depends on how much detail the viewer can fill in on their own. A long form doesn't need a frame for filling out every field, but if certain settings aren't obvious, it's good to break them out into separate images.

Annotations

Actions depicted in the images can and should be annotated to explain what is going on in more detail. Short notes like “user enters their credit card number and expiration date and clicks submit” can clarify what is going on in the image if it’s not completely obvious.

Images can be enhanced with additional visual information. Arrows can indicate something like the swiveling of a person’s head when they notice something, or motion blur can be used to indicate fast movement.

Inspiration: Marvel

Marvel, much like InVision, is an online prototyping tool designed to be used by small teams. Team members may upload images, made in third-party software, representing screens in the application. Screens can be given hotspots that represent navigation or different screen states. Marvel also has a "Canvas" that provides basic drawing capabilities. The drawing system is powerful enough to draw blocks of color and place text. Multiple images can be loaded onto the canvas to create composites.

Users with a Marvel account can view the prototypes online, or use the Marvel App on either iPhone or Android phones for a more realistic mobile experience.

I had started using InVision for the first round of prototypes for the Medium-Fidelity Mobile project, but ran into problems when I tried to upload images larger than 10MB. I managed to shrink the images enough to upload them, and it seemed to work, but the smartphone version didn't work at all. A few days later, the images wouldn't appear even on the desktop. Marvel, on the other hand, was not at all squeamish about large images, and has been more reliable. The Marvel mobile app doesn't seem to accept my GIF animations, but I can live with that for now, so long as the desktop version animates.

Homework 6: Medium-Fidelity Desktop-Focused Tool

Context

This time around, I decided to back off a bit in the detail, and focus further on interactions in the prototype. For both tasks, there is a sort of navigation between screens, but the navigation is less link-based, and more state-based.

For the Alarm task, I tried a tabbed interface for the three components of the alarm: time, location, and message. The alarm list is still somewhat separate, and the tabs are shown by touching the new alarm button. The new alarm button has been integrated with an empty row in the alarms list to convey the idea that the new alarm will appear in place of the button. For future prototypes, I'd like to find a better way of conveying that the time settings are draggable and that the map is interactive.

In previous iterations, the Homing task felt as if it was a set of screens to be navigated between. In the Doctor’s (science fiction) reality, the homing screen is really just one screen that can show a few things selectively. To that end, I removed the choosing screen from the previous PowerPoint prototypes, and integrated the three buttons into the map. Simply touch a button, and the map updates.

Version 1 to Version 2

For the homing screens, the main change between version 1 and version 2 was the ability to zoom in and out on the map. I only implemented two zoom levels, but the zoom buttons do function.

For the alarm screens, I managed to create enough states in version 2 such that the tabs all work. The new alarm button shows the tabs, and using the save button in the message tab “adds” the alarm to the list by showing two additional rows, and hiding the old blank row. Version 2 feels much more like a working application.

Reflection on Using ProtoShare

ProtoShare was much more powerful than PowerPoint for showing different application states. Each button in the prototype can be made to cycle between two or more states. Individual elements can be shown or hidden based on one or more states. The ability to react to more than one state offers very fine-grained control over what does and doesn't show. I really liked the state sidebar because it lets you set all of the different states in the prototype and quickly find out under which circumstances an element is or isn't showing.

The main thing I missed from PowerPoint is the ability to draw on the screen. PowerPoint's drawing tools leave a lot to be desired, but at least they exist. Fortunately, ProtoShare does allow you to upload images, which is how I got around the problem. Still, it would have been nice to have a single map that is always shown and to change only the path lines based on one state, rather than needing several states to switch among sixteen all-in-one images.

The other limitation is that, like in PowerPoint, interaction is strictly limited to clicking and hovering. There is still no way to interact by dragging objects around the screen.

Other Situations and Projects

ProtoShare seems strongest when used with applications that make heavy use of forms. I didn't manage to figure out a way to "POST" something in a form to another part of the interface, but other than that, forms are quite functional and realistic. The only place lacking realism is that the widgets don't necessarily look like native system widgets.

One thing I was hoping to see in ProtoShare was the ability to do real-time editing with multiple people at once, the way most Google Drive applications do. Unfortunately, it doesn't work that way, but there is still some benefit to having several people work on the same prototype at different times. At least everybody is looking at the same document, rather than at different versions floating around in various emails.

Inspiration: Blender

Blender is the Free/Open Source Software community’s contribution to 3D computer art and animation. It fills the same niche as commercial products such as Maya and 3ds Max.

[Image: Blender_01.png]

While the feature set offered by Blender is far larger than needed for prototyping purposes, there are some very useful features that can be used to quickly develop 2D and 3D representations of animated user interfaces. Many premade meshes are available for free on the web, and simple animation can be achieved relatively easily. The skill level required for prototyping purposes is similar to that of learning something like Photoshop or Illustrator. Photorealistic movies are quite difficult to make, but that level of skill is unnecessary for prototyping.

The main difficulty in using Blender is that the user interface is rather different from that of 2D image-editing software. There are so many functions that learning keyboard shortcuts is very important for achieving any kind of development speed, and it can be difficult to find what you need if you don't already know where it is. Fortunately, Blender's user interface had a major overhaul a few years ago, and functions are organized much better than they used to be.

One big benefit of using Blender is that it has an extremely active community of artists and technicians around it. It's extremely easy to find a video demonstration of the functionality you're looking for, and there are some websites, such as CG Cookie, that specialize in training courses. Video demonstrations have become so common that recent releases of Blender include built-in screencasting of the user interface.

For my prototypes, I needed a way of illustrating how the date/time selector works, since that’s outside of the functionality of pretty much every prototyping tool. In Blender, I added a bunch of individual text blocks, and animated them using the graph editor. The graph editor is a representation of object attributes over time. Time is represented by individual frames of a movie, and the object attributes, location in my case, are represented by curved lines – one for X position, one for Y position, and one for Z position. Instead of directly manipulating the curves, it’s easy to set a frame in the graph editor, move the objects to the desired position for that frame, and set a key frame. Blender automatically creates a curve to interpolate the position for each frame between key frames.

[Image: Blender_02.png]
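For what it's worth, the same keyframing can be done through Blender's Python API (bpy) instead of the graph editor. This is a minimal sketch, with made-up text content and frame numbers, of keying one text block's location at two frames and letting Blender interpolate the rest:

```python
import bpy

# Add a text object for one time unit (the content here is made up).
bpy.ops.object.text_add(location=(0.0, 2.0, 0.0))
digit = bpy.context.object
digit.data.body = "09"

# Key the starting position at frame 1...
digit.keyframe_insert(data_path="location", frame=1)

# ...move the object, then key the ending position at frame 24.
# Blender creates the interpolation curves automatically, exactly
# as when setting keyframes through the graph editor.
digit.location = (0.0, 0.0, 0.0)
digit.keyframe_insert(data_path="location", frame=24)
```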

Once all the animations are finished, Blender can render a movie. If an animated GIF is desired, a third-party tool such as ImageMagick can be used to convert the movie frames into GIF frames.
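As a rough sketch of that last step, assuming the movie has been rendered as numbered PNG frames into a frames/ directory and that ImageMagick's convert command is installed, the conversion can be scripted like so:

```python
import glob
import subprocess

# Collect the rendered frames in order.
frames = sorted(glob.glob("frames/*.png"))

# ImageMagick's convert: -delay is hundredths of a second per frame,
# and -loop 0 makes the GIF repeat forever.
subprocess.run(
    ["convert", "-delay", "4", "-loop", "0", *frames, "anim.gif"],
    check=True,
)
```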

Homework: Medium-Fidelity Using PowerPoint

Context

For this round of prototyping I once again focused on the same two tasks for the Doctor’s watch: setting alarms, and locating objects or people. For the previous set of paper prototypes, I had laid out some basic screen structure, but there had been only cursory thought given to how to transition between different screens. Since PowerPoint does quite well at demonstrating navigation between screens, I decided to focus on that aspect of the screens.

For the alarm task, the Doctor needs to be able to set an alarm to go off when he is in a certain location, or at a certain time, or when both location and time conditions are met. Set alarms appear in a list that is shown when the screen opens. From there, a new alarm can be created, or a previously set alarm can be edited. Alarms are still missing the ability to be deleted, which will have to be addressed in future prototypes. There are navigation buttons that can be pressed when alarm creation/editing is in progress to bring up location and time settings. Messages can be entered to inform the Doctor of what the alarm is for, in case he forgets why it was set. Since he travels across time at will, keeping all these reminders organized is very important.

For the homing/tracking task, the Doctor has transmitters set on several people and objects that he wants to keep track of, including his TARDIS, his traveling companions, and UNIT field offices that he’s working with at the time. At any time, he can call up a map that indicates the directional bearing of the object or person, and a path to travel to get to that location on foot or by vehicle. For this iteration, object selection and map viewing are separate screens that need to be navigated between.

Version 1 to Version 2

The largest change from version 1 to version 2 was in the Homing task screen. At the advice of another student, I dropped the 3D map entirely and offered only a flat map interface. For future prototypes, I think that navigating the entire universe will still require a 3D interface, but my current working idea coming out of this prototype is a seamless transition from a 3D view of location at the interplanetary level down to a 2D map representation as the view is zoomed in to the single-planet level.

For the Homing task screen, I also dug deeper into PowerPoint's ability to convey navigation. In version 1, I just had a set of buttons for each trackable object, with no interactivity represented. For version 2, I made each button pressable. There are now three trackable objects, and since each can be either selected or not, there are 2³ = 8 possible states. For each state, I made a button slide with slight modifications to represent the selected buttons, and a corresponding map slide with routes to the tracked objects marked.

The Alarm interface had far fewer changes, but I did manage to fill in a few missing pieces of functionality. Each alarm can be saved with a name; this functionality was completely missing from version 1, and version 2 corrected that.

Messages for alarms can either be typed in or input by voice, which converts the spoken words into text. Version 1's ear icon seemed confusing to the fellow student I showed it to, so I made it a microphone icon in version 2, as she suggested. I also created an interaction on the microphone to indicate when it is and isn't recording.

Looking ahead to future iterations, I’m thinking that in the homing task, consolidating the object selection and map into one screen will be a more elegant approach.

For alarms, I’m still bothered by ‘typing’ in a 3D holographic screen.

Reflection on Using PowerPoint

I had no idea that this sort of work could be done in PowerPoint. While it’s obviously not great for every situation, it certainly can be used to communicate basic screen structure and very simple functionality. It also seems great for making a series of linked views without having to write HTML or JavaScript.

The main thing I missed when using PowerPoint is that interactions are limited to clicks and mouseovers. I really wished I had a way of dragging shapes around on one screen, especially for the alarm time setting interface.

My other gripe is that for anything that’s not boxy in shape, PowerPoint’s drawing capabilities are limited. In talking with other students, it seems that many of them drew their unusual imagery in Illustrator or Photoshop, imported the images into PowerPoint, and then drew hotspots over the navigable parts.

Still, none of this takes away from PowerPoint’s ability to create basic prototypes that can answer simple questions quickly.

Other Situations and Projects

A friend of mine at work almost always does his prototypes in Visio, but I think I may be able to convince him to try this out. The main problem with using Visio is that there’s no way that I know of to communicate navigation. PowerPoint also seems capable of representing most of the user interface elements that an application needs, and since the two applications are related products, Visio content can probably be easily integrated with a PowerPoint prototype. We’ll have to give it a try next time we’re working together.

Another aspect of this tool for work is that PowerPoint presentations seem to be very common. I’ve often joked that there’s the attitude of, ‘PowerPoint, or it didn’t happen.’ It’s as if a talk or presentation can’t be made without a slide deck, and sometimes, the sole source of documentation for a software package comes in the form of a printed PowerPoint presentation. I think this is really weird, but at least PowerPoint is an accepted form of communication, while not confusing anyone about whether or not it’s a finished product. I think the only thing I’d have to make sure of is that, if emailing the presentation, the people I’m communicating with actually start the presentation rather than just skimming the slides.

Inspiration: Sketching in Code: the Magic of Prototyping

I found this article on the A List Apart website. The author makes the case that wireframes can't always communicate the interactive features of a web application; sometimes, there's no substitute for a coded prototype.

I found this interesting because, over a few of the prototypes we've made for class, I've been tempted to just bash out some code and get the interaction I want. In some cases, especially for testing out very specific interactions, I think this can be useful if the scope is limited.

My concern with this approach, as the author mentions, is the confusion that prototyping in the delivery medium can bring. We definitely had this problem on one project at work, where "prototype" was synonymous with "soon to be finished project".

One of the big dangers the author mentions is the tendency to over-engineer the prototype, wasting resources. A coded prototype should do no more than answer a specific question in the fastest way possible.
