Alexa-enabled smart camera and connected experience

Aberrator Mark I — Personal Project

Created March–April 2017

Multimodal, Interaction Design, UX, Physical/Digital & IoT, Voice UI, Product Design

The Task: Design a seamless, multi-device, Alexa-powered experience and showcase these interactions through a “Wizard of Oz” prototype.

Timeline
March 28th — April 27th

I was given an extremely short turnaround time to design a multimodal connected experience based on the Amazon Alexa platform. The key was to validate an early-stage idea and get to a prototyping stage as quickly as possible.

I followed some of Amazon’s conventions, such as writing a press release before designing anything, in the process of orchestrating this experience. This allowed me to validate the idea: if it can't make it past the press release, you need to go back to the drawing board.


The Wizard of Oz Prototype

The ecosystem includes a smart DSLR camera, a smartphone application, a desktop application, and an Alexa Skill.

My First Steps

Before diving into designing an experience, I had to define who this would be for, what problem needed to be solved, where this engagement would take place, when the user would interact with it, and why the issue needed to be addressed. Outlining those points gives me not only context to work in but also the motivation to solve the problem. It provides a launching point that lets the rest of the process flow smoothly.

I landed on designing an experience for photography because I felt it opened up a lot of possibilities while also allowing me to operate within a set of specific limitations. But before going further, I had to know...

1. Who is my demographic?

The demographic was important to outline, as it is a fairly specific group of power users. Did it include all photographers? Fashion? Food? Art? Where did it begin and end, and what needs do they have?

2. What needs/problems can I solve?

What’s the point of making technology without actually solving a real need? To find a problem worth solving, I had to ask photographers what their problems were.

3. How do I solve them?

That’s the real trick, isn’t it? I had to ask whether a multimodal experience was even the best solution for the problems I’d found, and if so, what the best implementation would be.


Press Release

The first step to designing in an "Amazonian" way is writing a press release, or so we've been told. So, that's what I did. I wrote a full press release describing the product and what it does before designing a single thing. Here's a snippet:

"The Aberrator Mark One levels your photography game up Alexa has been elevated from an everyday voice assistant to a powerful and trustworthy professional photography with the Aberrator Mark One. Professional photographers paired with the internet connected DSLR, Aberrator Mark One, can conversationally ask their camera to upload their photos to a cloud service, change internal camera settings, release the shutter remotely, set up long exposure photos, and so much more."


Orchestration and Information Architecture

In a multimodal experience, the orchestration is the most crucial piece of the puzzle. And trust me, it is a puzzle. What should be happening, when, and why? In my case, I designed a happy path: no error states or divergent paths are considered in this preliminary version.

I started on sticky notes, then graduated to a Lucidchart document.
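
To make the happy path concrete, here is a rough walkthrough of one flow as a Python sketch. Every actor and step is illustrative only; nothing here is implemented, and, by design, no error states or divergent paths are modeled.

    # A rough sketch of one happy-path flow from the orchestration chart.
    # Every actor and step is illustrative only -- the prototype is Wizard-of-Oz,
    # so nothing below is backed by a real system.

    HAPPY_PATH = [
        ("Photographer", "Says: 'Alexa, tell my camera to upload today's photos.'"),
        ("Alexa Skill",  "Parses the intent and forwards an upload request to the camera."),
        ("Camera",       "Pushes the day's photos to the cloud service over Wi-Fi."),
        ("Cloud",        "Stores the files and notifies the paired apps."),
        ("Mobile app",   "Shows an upload-complete notification."),
        ("Desktop app",  "Surfaces the new photos, ready for editing."),
    ]

    def walk_happy_path(steps=HAPPY_PATH):
        for actor, action in steps:
            print(f"{actor:<13} -> {action}")

    walk_happy_path()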


Next up was establishing the Information Architecture. Based on the work done in the orchestration, I developed a logical architecture across the experience. In VUI specifically, the IA can make or break a product, largely because of cognitive load. In this first pass at the IA I attempted to solve as many problems as possible, but given more time I would relentlessly test and validate it.
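
To illustrate the cognitive-load point, here is a toy slice of the voice IA expressed as nested data, kept deliberately shallow. The intent names and tasks are placeholders, not the final architecture.

    # A toy slice of the voice IA, kept deliberately shallow: every task sits
    # one level below a top-level intent, so users never have to hold a deep
    # menu path in their head. All names are placeholders, not a final IA.
    VOICE_IA = {
        "CaptureIntent":  ["release shutter", "start long exposure", "set a timer"],
        "SettingsIntent": ["change ISO", "change aperture", "change shutter speed"],
        "LibraryIntent":  ["upload today's photos", "note last photo", "check storage"],
        "StatusIntent":   ["battery level", "golden hour today", "connection status"],
    }

    for intent, tasks in VOICE_IA.items():
        print(f"{intent}: {len(tasks)} tasks, one level deep")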


UX Components

After establishing what this ecosystem needed, I had to have a rationale for why, along with any required technical specifications. I'll spare you the lists of technical specs and functionality and give you the justifications instead.

The Camera

The camera is the user’s main point of interfacing; all of the product’s functionality flows from it. It’s important that it be buttoned up, make photographers’ jobs easier, and let them focus on shooting.

Alexa Skill

This functionality is important because the user can ask these questions while preparing for the day or packing up their camera, without having to sit down and stop what they’re doing to find quick bits of information.
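
As a sketch only, here is roughly what one handler for a quick-status question could look like using the ASK SDK for Python. The intent name and the camera-status helper are hypothetical; in the Wizard-of-Oz prototype there is no real device API behind this.

    # A minimal sketch of an Alexa Skill handler for a quick-status question,
    # e.g. "Alexa, ask my camera how much battery is left."
    # "BatteryStatusIntent" and get_camera_status() are hypothetical.
    from ask_sdk_core.skill_builder import SkillBuilder
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.utils import is_intent_name


    def get_camera_status():
        # Stand-in values; a real skill would query the connected camera.
        return 78, 412  # battery %, shots remaining on the card


    class BatteryStatusIntentHandler(AbstractRequestHandler):
        def can_handle(self, handler_input):
            return is_intent_name("BatteryStatusIntent")(handler_input)

        def handle(self, handler_input):
            battery, shots_left = get_camera_status()
            speech = (f"Your camera is at {battery} percent "
                      f"with roughly {shots_left} shots left on the card.")
            return handler_input.response_builder.speak(speech).response


    sb = SkillBuilder()
    sb.add_request_handler(BatteryStatusIntentHandler())
    handler = sb.lambda_handler()  # entry point if deployed as an AWS Lambda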

The Mobile App

This functionality takes the smart camera from “hey, that’s pretty neat” to “hey, that’s a powerful tool I could use,” especially the ability to reach your camera remotely in situations like nature photography or long exposure shots.

The Desktop App

This feature is essential for the demographic this is targeting. The user needs to be able to sit down and edit these photos, and giving them quicker access is very important. This also needs to integrate into their workflow, and not interrupt it.


Demographic research

This made up a large chunk of my project’s research phase. I began by searching for demographic information on professional photographers from sources like the Bureau of Labor Statistics and AI-PI. This info helped me shape my target demographic. It’s hard to think of a use case for a voice-activated camera for a photographer in a studio, but how much of the photographer market is out and about? That was the primary piece of information I was looking for.

66% of photographers say they specialize in portrait photography, though photographers commonly claimed more than one specialization. My area of focus is photographers getting out of the studio, which includes event photography (33%), photojournalism (26%), nature (24%), travel (16%), and some fine art (35%), depending on the type of photos they take. (source)

Another important consideration moving forward would be compatibility with other lenses and systems. Photography equipment isn’t cheap and having to repurchase everything would be a massive deterrent to switching over.

This proto-persona was the result of the findings in these research phases. This persona is loosely based on real data, but non-specific and not directly linked to the product.


Market research

I looked at several reviews of existing smart cameras. This research informed the type of problems to avoid. For instance, I learned that the Canon 70D doesn’t auto-update your timezone despite being Wi-Fi enabled; that’s a simple fix.

While a lot of reviews cover the industrial design and engineering of the camera, some bits of user interface information do slip in here and there. Specifically, a video by DigitalRev TV on YouTube covers the connection of the 70D to the iPhone app.

I also took the time to search for camera UI best practices. Resources on the topic were thin, but what I did find helped. A startup had written a Medium article about how they designed a camera UI that allowed the user to interact directly with the image to change settings while composing the shot. Most searches turned up mobile app results rather than professional DSLR results.


Need finding

Real UX happens when reading BuzzFeed listicles with accompanying GIFs.

For research purposes only, I found myself reading BuzzFeed articles and blog posts by photographers complaining about everyday annoyances. I also did some research on Magic Lantern, the firmware add-on for Canon cameras.

My goal was to identify the sorts of problems photographers have and how they fix them currently, if at all. The most common complaint was the weight of the cameras and gear they find themselves carrying. After that, it tended to be complaints about dealing with clients. If this device were able to make those sorts of problems more manageable, we’d be in a good spot.

Magic Lantern exists because photographers want to get more out of their cameras than what comes stock on them. Specifically, it adds a lot of options to the video functionality of the camera. The UI itself isn’t flashy, but the functionality it adds seems to be very desirable. Overall, it appears that most photographers have a form of Stockholm syndrome with their cameras’ interfaces.

My big takeaways: Carrying camera gear is annoying, and most photographers just want more options on their cameras.


Surveys

4/6 were very interested in the product.
One was concerned about connectivity when out shooting.
One wasn’t interested at all.

I sourced six professional photographers in my circle and had them answer a quick survey, keeping it short to get more detailed and thought-out responses to the questions I did ask. From it, I got great insight into potential functionality and their current pain points, and the answers largely informed the end design and next steps for this project.

Survey Script:

Let's say there was a "smart," Amazon Alexa-enabled camera that came with a companion phone app as well as an Alexa "Skill" (which is an app for Alexa, if you didn't know).

  1. Would you be at all interested in something like that?
  2. What kind of features would you expect it to have?
  3. What's your least favorite thing about existing camera interfaces/menu systems?


Research Synthesis

Some of the most significant takeaways came in the form of feature ideas, as well as differing expectations about functionality and the way users would interface with the camera and ecosystem.

One user said he would expect the mobile app to operate like the GoPro app and would love to see how much more he could control his camera remotely. This insight pushed me to incorporate more of those features into the app, such as changing settings, releasing the shutter, and viewing and downloading photos.

The camera UI was mostly informed by the market research and cultural analysis of photographers and their common complaints. Additionally, the mobile app that let users interact directly with the image to change things like focus point and exposure informed those features in my prototype. Overall, this research indicated that there is a desire for robust, connected cameras and that the current market leaves something to be desired.

Action Items:

  1. Improve photographers’ confidence in the system
  2. Better communicate phone and camera connectivity
  3. Add more features to camera UI
  4. Run more user tests


Next Steps

Planned Features

Making data more meaningful and useful for the user. High-end cameras already collect a ton of data when they take photos, from camera settings and light information to GPS location. Designing features that make powerful use of that data would set this device apart. The current iteration includes things like the Golden Hour calculator, which uses your GPS location and timezone data to determine when golden hour is, and the map feature, which tracks where your photos were taken so you can return to those spots in the future.
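
As a rough sketch of how the Golden Hour calculator could work, assuming the camera can hand over GPS coordinates and an IANA timezone name: this leans on the third-party astral package for sunrise and sunset, and treats golden hour as the hour after sunrise and before sunset, which is a rule of thumb rather than a spec.

    # Approximate golden-hour windows from GPS coordinates and a timezone name.
    # Uses the third-party `astral` package (pip install astral); the one-hour
    # window after sunrise / before sunset is a rough convention, not a standard.
    from datetime import date, timedelta
    from zoneinfo import ZoneInfo

    from astral import LocationInfo
    from astral.sun import sun


    def golden_hours(lat, lon, tz_name, on_date=None):
        tz = ZoneInfo(tz_name)
        loc = LocationInfo(latitude=lat, longitude=lon, timezone=tz_name)
        s = sun(loc.observer, date=on_date or date.today(), tzinfo=tz)
        morning = (s["sunrise"], s["sunrise"] + timedelta(hours=1))
        evening = (s["sunset"] - timedelta(hours=1), s["sunset"])
        return morning, evening


    # Example: "Alexa, ask my camera when golden hour is today." (Seattle)
    am, pm = golden_hours(47.6062, -122.3321, "America/Los_Angeles")
    print("Morning:", am[0].strftime("%H:%M"), "-", am[1].strftime("%H:%M"))
    print("Evening:", pm[0].strftime("%H:%M"), "-", pm[1].strftime("%H:%M"))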

An exciting recommendation was to include a feature where photographers could add a note to a photo with their voice instead of having to write it down. This is a common workflow for many travel photographers, so making it easier could be powerful, especially for my golden target demographic of travel photographers who also blog about their experiences. Moving forward, I would like to add more features like this.
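
A tiny sketch of how voice notes could be attached to photos, using a sidecar JSON file so nothing in the camera firmware has to change. The filename, note text, and trigger phrase are made up for illustration; a real build might write into EXIF/XMP metadata instead.

    # Attach a transcribed voice note to a photo via a sidecar JSON file.
    # The filename, note text, and trigger phrase are illustrative only.
    import json
    from pathlib import Path

    NOTES_FILE = Path("photo_notes.json")


    def add_voice_note(photo_filename, transcript):
        notes = json.loads(NOTES_FILE.read_text()) if NOTES_FILE.exists() else {}
        notes.setdefault(photo_filename, []).append(transcript)
        NOTES_FILE.write_text(json.dumps(notes, indent=2))


    # e.g. "Alexa, add a note to my last photo: shot at the old lighthouse."
    add_voice_note("IMG_4021.CR2", "Shot at the old lighthouse during golden hour, 35mm.")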

Tests with Professional Photographers

It’s important to talk to your demographic and test your product with them. Moving forward, I would like the opportunity to sit down and watch photographers use their gear in the way that’s most natural to them. This would reveal inefficiencies and problems, alongside the features photographers use the most.

As it is an art, I’m sure a lot of photographers have different ways of working, so sitting down with a variety of them will be very important. After doing some more observational research, I’d love to actually test the multimodal experience with a real camera. That may not be possible given my limitations as a developer (I am not one), so the next best thing would be testing a prototype and having the photographers pretend they are taking photos. The environment in which these tests take place is crucial, since it shapes how the photos themselves are taken. In a perfect world, I’d go out with photographers to their favorite spots to shoot and run these tests there.


Retrospective

A ton of my time was spent on the interaction concepts and early-stage research, and not much on actually designing out the product’s interactions with prototypes. In the future, I want to pursue projects like this with those things in mind. Apps are obviously powerful tools, but designing an end-to-end experience is interesting and difficult in its own right, and it’s a very different exercise from designing in only one lane.

Designing for a particular power-user centric device like a DSLR was a new and unique challenge. The target demographic consists of professionals with a lot of demands and expectations, and the technology itself is specific. Getting a good grasp on what the best practices are while also trying to wrap my head around the functionality and purpose of everything was very exciting.

Thanks for reading.


Since you made it this far, you may be interested in this project.

Star Wars Force Facts — Personal Project