“How might the application of user experience design processes be leveraged in mitigating social media users’ consumption of misinformation?”
In order to foster well-informed Americans, my product will address social media users’ consumption of false and misleading news stories by giving them a platform for analyzing their news intake and output, as well as their news consumption bias.
I will know this product is successful when users take proactive measures to reduce their false news intake.
The idea behind a user story is to write a simple sentence that states a function the user can experience inside the application. They sometimes follow slightly different formulas, but the most standard is "as a user, I can..." followed by an intent or action the user is able to perform in the system.
Doing this gets everyone on the team on the same page. In my case, I prefer starting with user stories and then designing the user flow/task flow, since it gives me a solid starting point.
Jumping into designing an app without these pieces to fall back on feels a lot like writing a book without an outline.
“As a news consumer, I can receive notifications informing me of alternate news sources to stories I share.”
“As a news consumer, I can add news stories to my reading list.”
“As a news consumer, I can share stories through a filter to ensure they are valid.”
“As a news consumer, I can see my news consumption bias over time.”
The only way a product like this can be successful is if it meets users where they are. News media is continually being disrupted and pushed onto mobile, so to be useful, this platform has to live where users already get their news.
In 2018, a large share of people get their news from social media. Facebook and Twitter are aware of this and have launched products to facilitate those interactions (Twitter Moments, Facebook Trending). These services are also the primary channels through which misinformation proliferates.
Therefore, this product necessitated both a Facebook and a Twitter integration. However, a large portion of users also get their news from other sources, such as Reddit, general web browsing, and specific news organizations’ mobile apps. To help users assess the quality of the content they are consuming, this product has to grow beyond simple account linking.
My solution to this was the ability to filter any article or news source proactively through the Lowdown platform.
This works much like sharing a link to Facebook or Twitter in iOS: in the share sheet that slides up from the bottom, an app can register its own “action” icon. An example of an application that does this in a similar fashion is Pocket (getpocket.com), which allows users to tag and store interesting links in their profile.

The user can open the sharing window, tap Lowdown, and instantly identify whether the story is trustworthy. If it’s not, they will receive recommended articles to read. If it is safe, they will be given the option to share it.
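As a rough sketch of how that action extension might work, the snippet below pulls the shared article URL out of the iOS share sheet and hands it to an analysis call. The standard extension APIs (NSExtensionItem, NSItemProvider) are real; LowdownAPI, TrustVerdict, and the presentation helpers are hypothetical placeholders for the platform’s actual rating service and UI.

```swift
import UIKit
import MobileCoreServices

// Hypothetical verdict returned by Lowdown's rating service.
enum TrustVerdict {
    case trustworthy
    case disputed(alternatives: [URL])
}

// Placeholder for the platform's (assumed) analysis backend.
enum LowdownAPI {
    static func checkTrust(for url: URL, completion: @escaping (TrustVerdict) -> Void) {
        // A real implementation would call Lowdown's servers; stubbed here.
        completion(.trustworthy)
    }
}

class LowdownActionViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // The share sheet hands the extension whatever the user was viewing.
        guard
            let item = extensionContext?.inputItems.first as? NSExtensionItem,
            let provider = item.attachments?.first(where: {
                $0.hasItemConformingToTypeIdentifier(kUTTypeURL as String)
            })
        else {
            extensionContext?.completeRequest(returningItems: nil, completionHandler: nil)
            return
        }

        // Pull the article URL out of the attachment and send it off for analysis.
        provider.loadItem(forTypeIdentifier: kUTTypeURL as String, options: nil) { [weak self] data, _ in
            guard let url = data as? URL else { return }
            self?.analyze(url)
        }
    }

    private func analyze(_ url: URL) {
        LowdownAPI.checkTrust(for: url) { [weak self] verdict in
            DispatchQueue.main.async {
                switch verdict {
                case .trustworthy:
                    self?.offerShareOptions(for: url)            // safe: let the user share it
                case .disputed(let alternatives):
                    self?.showRecommendedReading(alternatives)   // disputed: suggest other sources
                }
            }
        }
    }

    private func offerShareOptions(for url: URL) { /* present sharing UI */ }
    private func showRecommendedReading(_ urls: [URL]) { /* present alternate articles */ }
}
```

Keeping the analysis behind a single checkTrust call would let the same logic back both the proactive filter and the reactive safety net described next.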
Adding this layer provides proactive functionality to the application while also remaining a reactive safety net in case the user shares a disputed article directly to their social media profiles.
iPhone X, Amazon Alexa, Apple Watch
Interaction and user flows
Multimodal Interaction Flow
Mobile App Flow
Watch App Flow
User: Alexa, open Lowdown.
Alexa: Let's get you the lowdown.
Alexa: You can say commands such as: "what are my news stats", "read me my bookmarks", or you can ask for help.
User: What are my stats?
Alexa: It looks like you have been reading high-quality news lately.
Alexa: 78% of your news has had a left-leaning bias.
Alexa: What else can I do for you?
User: Help.
Alexa: Happy to help! You can ask for your news consumption stats, or you can have me read your bookmarks to you. This skill requires a Lowdown account. What can I do for you?
User: Read me my bookmarks.
Alexa: The first bookmark is titled: "There is not an alien invasion in Oregon" via the New York Times. Do you want to listen to this article or hear about the next one?
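Alexa skills are normally built with the Alexa Skills Kit in Node.js or Python; purely as an illustrative sketch (kept in Swift for consistency with the share-extension example above), the dialogue boils down to a small set of intents mapped to spoken responses. Every name below (LowdownIntent, NewsStats, Bookmark, respond) is hypothetical.

```swift
import Foundation

// Hypothetical models for what the skill reads back to the user.
struct NewsStats {
    let quality: String        // e.g. "high-quality"
    let leftLeaningShare: Int  // percentage of recent reading with a left-leaning bias
}

struct Bookmark {
    let title: String
    let source: String
}

// The spoken requests from the dialogue above, modeled as intents.
enum LowdownIntent {
    case getStats
    case readBookmarks
    case help
}

// Hypothetical routing layer: each intent maps to a spoken response.
func respond(to intent: LowdownIntent, stats: NewsStats, bookmarks: [Bookmark]) -> String {
    switch intent {
    case .getStats:
        return "It looks like you have been reading \(stats.quality) news lately. "
            + "\(stats.leftLeaningShare)% of your news has had a left-leaning bias."
    case .readBookmarks:
        guard let first = bookmarks.first else { return "You have no bookmarks yet." }
        return "The first bookmark is titled: \"\(first.title)\" via \(first.source). "
            + "Do you want to listen to this article or hear about the next one?"
    case .help:
        return "Happy to help! You can ask for your news consumption stats, "
            + "or you can have me read your bookmarks to you."
    }
}
```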
Considering skeptics and careful users
The most common question I’ve gotten from mentors, users, and peers alike has been this:
“How do I know your platform is trustworthy?” When designing the various systems inside Lowdown, I found it very beneficial to consider the opposite perspective: "Who might disagree with what I'm designing?"
Doing this led me to a few design decisions that are worth highlighting:
— The addition of Frequently Asked Questions that explain how articles are rated and what the intent behind the application is
— The ability to disagree with an analysis result and submit a reason
— Providing explanations for why we ask users to connect their social media accounts
— NOT telling the user that an article is outright fake; instead providing them with the tools to investigate on their own
I considered using the behavioral economics principle of social proof; however, I opted against it because it seemed to undermine the intention of Lowdown in the first place.
Social proof is essentially showing that "other people do this, and it's cool for you to do it too". In this instance, it would have manifested as something like "Your friends liked this article."
Instead, I found it better to get in the user’s line of sight with notifications and friendly nudges in the right direction, rather than aggressively attacking their perceptions.
Doing so would only result in belief persistence and turn away careful or skeptical users, who may be the most important demographic to reach in the first place.