March 28th — April 27th
I was given an extremely short turnaround time to design a multimodal connected experience based on the Amazon Alexa platform. The key was to validate an early-stage idea and get to a prototyping stage as quickly as possible.
While orchestrating this experience, I followed some of Amazon’s conventions, such as writing a press release before designing anything. This allowed me to validate the idea quickly: if it can't make it past the press release, you need to go back to the drawing board.
The ecosystem includes: a smart DSLR camera, a smartphone application, a desktop application, and an Alexa Skill.
Before diving into designing an experience, I had to define who this would be for, what problem needed to be solved, where the experience would take place, when the user would interact with it, and why the problem needed solving in the first place. Outlining those points gives me not only context to work in but also the motivation to solve the problem. It provides a launching point that allows the rest of the process to be seamless.
I landed on designing an experience for photography because it opened up a lot of possibilities while still letting me operate within a specific set of limitations. But before going further I had to know...
This was super important to outline, as my demographic is a fairly specific group of power users. Did my demographic include all photographers? Fashion? Food? Art? Where did it begin and end, and what needs do they have?
What’s the point of making technology without actually solving for a need? To find a problem worth solving, I had to ask photographers what their problems were.
That’s the real trick, isn’t it? I had to ask whether a multimodal experience was actually the best solution to the problems I'd found, and if so, what the best implementation would be.
The first step to designing in an "Amazonian" way is writing a press release, or so we've been told. So, that's what I did. I wrote a full press release describing the product and what it does before designing a single thing. Here's a snippet:
"The Aberrator Mark One levels up your photography game. Alexa has been elevated from an everyday voice assistant to a powerful and trustworthy professional photography assistant with the Aberrator Mark One. Professional photographers paired with the internet-connected DSLR, the Aberrator Mark One, can conversationally ask their camera to upload their photos to a cloud service, change internal camera settings, release the shutter remotely, set up long-exposure photos, and so much more."
In a multimodal experience, the orchestration is the most important piece of the puzzle. And trust me, it is a puzzle. What should be happening when and why? In my case, I designed for a happy path: there are no error states or external factors considered in this preliminary version.
I started on sticky notes, then graduated to a Lucidchart document.
Next up was establishing the Information Architecture. Based on the work done in the orchestration, I established a logical architecture across the experience. In VUI specifically, the IA can make or break a product because of the cognitive load it places on the user. In this first pass I attempted to answer as many problems as possible, but given more time I would relentlessly test and validate it.
After establishing what this ecosystem needed, I had to have a rationale for why, along with any technical specifications required. I'll spare you the lists of technical specs and functionality and walk through the justifications instead.
This is the main point of interfacing for the user. All the functionality for the product comes from this. It’s super important that this all be buttoned up and serve the purpose of making their jobs easier and allowing them to focus.
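To make the Skill-as-main-interface idea concrete, here is a minimal sketch of how spoken requests from the press release ("change internal camera settings, release the shutter remotely, upload photos") might route to camera actions. Every name here, the intents, slots, and the `FakeCamera` stand-in, is my own invention for illustration, not part of any real Alexa interaction model or camera API.

```python
# Hypothetical intent routing for the Aberrator Skill (illustration only).

def handle_set_setting(slots, camera):
    """e.g. 'Alexa, ask Aberrator to set ISO to 400'"""
    setting, value = slots["setting"], slots["value"]
    camera.settings[setting] = value
    return f"Okay, {setting} is now {value}."

def handle_release_shutter(slots, camera):
    """e.g. 'Alexa, tell Aberrator to take the shot'"""
    camera.shots_taken += 1
    return "Shutter released."

def handle_upload_photos(slots, camera):
    """e.g. 'Alexa, ask Aberrator to upload today's photos'"""
    count = len(camera.photo_queue)
    camera.photo_queue.clear()
    return f"Uploading {count} photos to your cloud library."

INTENT_HANDLERS = {
    "SetSettingIntent": handle_set_setting,
    "ReleaseShutterIntent": handle_release_shutter,
    "UploadPhotosIntent": handle_upload_photos,
}

class FakeCamera:
    """Stand-in for the connected DSLR, so the dispatch logic can be exercised."""
    def __init__(self):
        self.settings = {}
        self.shots_taken = 0
        self.photo_queue = ["IMG_001.raw", "IMG_002.raw"]

def dispatch(intent_name, slots, camera):
    """Map a recognized intent to a camera action and a spoken response."""
    handler = INTENT_HANDLERS.get(intent_name)
    if handler is None:
        return "Sorry, I didn't catch that."  # a real skill would reprompt here
    return handler(slots, camera)
```

Even a toy version like this makes the happy-path orchestration easier to reason about: each utterance maps to exactly one intent, one camera action, and one spoken confirmation.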
This functionality is important because the user can ask these questions while they’re preparing for the day or packing up their camera. They don’t have to worry about sitting down and stopping what they’re doing to find out quick bits of information.
This functionality takes this smart camera from "hey, that's pretty neat" to "hey, that's a powerful tool I could use." Remote access to your camera matters most in situations like nature photography or long-exposure shots.
This feature is super important for the demographic this is targeting. The user needs to be able to sit down and edit these photos, so giving them quicker access matters. This also needs to integrate seamlessly into their workflow, not interrupt it.
This made up a large chunk of my project’s research phase. I began by searching for demographic information on professional photographers from sources like the Bureau of Labor Statistics and AI-PI. This information helped me shape my target demographic. It’s hard to think of a use case for a voice-activated camera for a photographer in a studio, but how much of the market of photographers is actually out and about? That was the main piece of information I was looking for.
66% of photographers say they specialize in portrait photography, though photographers commonly claimed more than one specialization. My focus is on photographers who get out of the studio: event photography (33%), photojournalism (26%), nature (24%), travel (16%), and, depending on the type of photos they take, some fine art (35%). (source)
Another important consideration moving forward would be compatibility with other lenses and systems. Photography equipment isn’t cheap and having to repurchase everything would be a huge deterrent to switching over.
This proto-persona was the result of the findings in these research phases. He's loosely based on real data, but non-specific and not directly linked to the product being built.
I looked at several reviews of existing smart cameras. This research informed the type of problems to avoid. For instance, I learned that the Canon 70D doesn’t auto-update your timezone, despite being wifi-enabled. That fix is super simple.
While a lot of reviews cover the industrial design and engineering of the camera, some bits of user interface information do slip in here and there. Specifically, a video by DigitalRev TV on YouTube covers the connection of the 70D to the iPhone app.
I also took the time to search for camera UI best practices. Resources on the topic were thin, but what I did find helped. A startup had written a Medium article about designing a camera UI that let the user interact directly with the image to change settings while composing the shot. Most searches turned up mobile app results rather than professional DSLR results.
This literally included me reading Buzzfeed articles and blog posts by photographers complaining about common annoyances. I also did some research into the Magic Lantern camera OS.
My goal was to identify the sorts of problems photographers have and how they fix them currently, if at all. The most common complaint was the weight of the cameras and gear they find themselves carrying. After that, it tended to be complaints about dealing with clients. If this device was able to make those sorts of problems easier, we’d be in a good spot.
Magic Lantern exists because photographers want to get more out of their camera than what comes stock on it. Specifically, a lot of options are added for video. The UI itself isn’t flashy, but the functionality it adds seems to be very desirable. Overall, it seems that most photographers have a form of Stockholm Syndrome with their cameras' interfaces.
My big takeaways: Carrying camera gear is annoying, and most photographers just want more options on their cameras.
This survey gave me great insight into potential functionality and photographers' current pain points. I surveyed six professional photographers in my circle, keeping the survey short in order to get more detailed and thought-out responses to the questions I did ask. These answers largely informed the final design and the next steps in this project.
Let's say that there was a "smart" camera that was Amazon Alexa enabled and came with a companion phone app as well as an Alexa Echo "Skill" (which is basically an app for Alexa, if you didn't know).
Would you at all be interested in something like that?
What kind of features would you expect it to have?
What's your least favorite thing about existing camera interfaces/menu systems?
Some of the biggest takeaways were in the form of key feature ideas, as well as different expectations of the functionality and the way users will interface with the camera and ecosystem.
One user said he would expect the mobile app to operate like the GoPro app and would love to see how much more he could control his camera remotely. This insight pushed me to incorporate more of those features into the app, such as changing settings and releasing the shutter, as well as viewing and downloading photos.
The Camera UI was mostly informed by the market research and cultural analysis of photographers and their common complaints. Additionally, the mobile app that let users directly interact with the image to change things like focus point and exposure informed those features on my prototype. Overall, this research clearly indicated that there is a desire for powerful connected cameras and that the current market leaves something to be desired.
Making data more meaningful and useful for the user. High-end cameras already collect a ton of data when they take photos, from camera settings and light information to GPS location. Designing features that make powerful use of that data would set this device apart. The current iteration includes a Golden Hour calculator, which uses your GPS location and timezone data to determine when golden hour is, and a map feature that tracks where your photos were taken so you can return to those places in the future.
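To show the kind of computation the Golden Hour calculator implies, here is a rough sketch using the standard solar-position approximation from latitude and day of year. The function names and the -0.833°/6° elevation bounds are my own choices (one common definition of morning golden hour); a real feature would also use the camera's timezone data to convert the result into local clock time.

```python
import math

def solar_declination(day_of_year):
    """Approximate solar declination in degrees for a given day of the year."""
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

def hour_angle(latitude, declination, elevation):
    """Hour angle (degrees from solar noon) when the sun sits at `elevation` degrees.
    Returns None if the sun never reaches that elevation on this day."""
    lat, dec, el = map(math.radians, (latitude, declination, elevation))
    cos_h = (math.sin(el) - math.sin(lat) * math.sin(dec)) / (math.cos(lat) * math.cos(dec))
    if not -1.0 <= cos_h <= 1.0:
        return None  # polar day or polar night at this elevation
    return math.degrees(math.acos(cos_h))

def morning_golden_hour_length(latitude, day_of_year):
    """Hours between sunrise (sun at -0.833 deg, accounting for refraction)
    and the sun reaching 6 deg elevation."""
    dec = solar_declination(day_of_year)
    h_sunrise = hour_angle(latitude, dec, -0.833)
    h_golden_end = hour_angle(latitude, dec, 6.0)
    if h_sunrise is None or h_golden_end is None:
        return None
    return (h_sunrise - h_golden_end) / 15.0  # 15 degrees of hour angle per hour
```

At the equator on the March equinox this gives roughly 27 minutes, while at 60° latitude it stretches toward an hour, matching the intuition that golden hour lasts longer the farther you are from the equator.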
An interesting recommendation was to include a feature where photographers could add a note to a photo with their voice, instead of having to write it down. This is a common workflow for many travel photographers, so making it easier could be really powerful, especially for my golden target demographic of travel photographers who also blog about their experiences. Moving forward, I would like to add more features like this.
Obviously, it’s super important to talk to your demographic and test your product with them. Moving forward I would like the opportunity to sit down and watch them use their gear in the way that’s most natural to them. This would help me spot inefficiencies and common problems, alongside the features photographers use the most.
As it is an art, I’m sure a lot of photographers have different ways of working, so sitting down with a variety of them will be very important. After doing some more observational research, I’d love to actually test the multi-modal experience with an actual camera. This obviously may not be possible due to my limitations as a developer (I am not one) so the next best thing would be testing a prototype and having the photographers pretend they are taking photos. The environment in which these tests take place is crucial, as that informs how photos themselves are taken. In a perfect world, I’d be able to go out with them to their favorite spots to shoot to run these tests.
A ton of my time was spent on the interaction concepts and early-stage research, and not much on actually designing out the product’s interactions with prototypes. In the future, I want to pursue projects like this with those things in mind. Apps are obviously powerful tools, but designing a literal end-to-end experience is super interesting and difficult, and it’s what separates this kind of work from designing in only one lane.
Designing for an extremely specific power-user centric device like a DSLR was a new and unique challenge. The target demographic consists of professionals with a lot of demands and expectations, and the technology itself is very specific. Getting a good grasp on what the best practices are while also trying to wrap my head around the functionality and purpose of everything was very exciting.
All in all, this stands as some of the work I’m most proud of.