After 2.5 days of facilitated brainstorming and prototyping at SNDMakes D.C., 11 cross-functional teams of designers, developers, journalists and product managers created projects addressing the design challenge, “How might we tell better picture stories?” Their work largely centered on themes like annotation, contextualization, and using photography to navigate non-linear stories.
Team Mount Pleasant
Jessica Morrison, Asst. Editor, Chemical & Engineering News
Ryan Pitts, Director of Code at Knight-Mozilla OpenNews
Kevin Kepple, Sr. Designer USA Today
Jasmine Wiggins, Digital Designer, National Geographic
Project: Five Story
What we made
We made a tool to help reporters tell compelling short story narratives using only five photos.
Why we made it
We wanted to see tighter, more focused photo essays.
What problem we’re trying to solve
Long, uncurated picture stories lose readers. Limiting the format to five photos forces the user to tell a tighter story.
Team Adams Morgan
Arjuna Soriano, Front-end and Newsroom Developer at Marketplace (APM) and Adjunct Professor at USC Annenberg.
Kavya Sukumar, Knight Mozilla ’15 Fellow at Vox Media
Keri O’Brian, Designer at Upstatement
Zach Wise, Interactive Producer and Associate Professor at Northwestern, Knight Lab
Project: Circle The Thing
We prototyped a tool for people to easily annotate images. We made it because it’s silly to have to say “in the upper left of this image, you’ll see the debris from the rocket” or “President Barack Obama (on the left), Congressman Hakeem Jeffries (standing next to him), and Elizabeth Warren (second from right)” and because when pictures float around social media, they often lose their captions and their context.
We mocked up the tool to create embeddable interactive images, demonstrated a working prototype of the audience’s experience on a webpage in which people can click or tap to toggle the annotations on and off, and mocked up the social sharing images that users would be able create straight from the webpage.
If we had had more time, we would have built the actual tool and the social image features.
Team Capitol Hill
Ellen Butters, Director, Platform Design / National Geographic
Anne Li, Knight Lab Fellow
Ryan Gantz, Director of UX / Vox Media
Claire O’Neill, Producer / NPR
Julia Smith, Knight-Mozilla Fellow / Center for Investigative Reporting
Inspired by Medium’s commenting structure and using sidecomment.js, we’ve mocked up how emoji annotations might work on a story page. We’ve also considered how the overall reaction to a story, measured by the most prevalent pictorial annotation, might be reflected as a barometer on something like a site homepage, or work like tagging to give the site’s content a sortable emoji-based taxonomy.
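At its core, the barometer idea is just a tally: collect readers’ emoji annotations and surface the most prevalent one. A minimal sketch of that logic (function and data names are ours, not the prototype’s):

```python
from collections import Counter

def story_barometer(reactions):
    """Return the most prevalent emoji reaction for a story.

    `reactions` is a list of emoji strings collected from reader
    annotations; ties break on first-seen order (Counter preserves
    insertion order for equal counts).
    """
    if not reactions:
        return None
    emoji, _ = Counter(reactions).most_common(1)[0]
    return emoji

# A homepage could show the dominant reaction next to each headline:
print(story_barometer(["😂", "😢", "😂", "🔥", "😂"]))  # → 😂
```

The same counts could back the tagging idea: sort or group stories by their dominant emoji to get the pictorial taxonomy described above.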
We did this because comments are rarely constructive. And sometimes you can say it better with an emoji. They’re fun and delightful — and a legitimate visual language. We think there’s potential in a layer of dynamic, audience-generated visual feedback.
The challenge here is the ambiguity of what emojis mean — and why they’re used. With so many open questions about language, meaning and intention, this is the sort of thing that would evolve in the hands of readers/users.
If we’d had more time, we would have built and tested a working prototype with actual humans.
Ashlyn Still, Developer at Atlanta Journal-Constitution
Crystal Gammon, Web Editor at Yale Environment 360
Emily Withrow, Faculty at Medill & Knight Lab
Chloe Magner, Senior Mobile designer at Washington Post
David Leonard, Frontend Developer at Code for America
Project: Road Tryp
We prototyped an app that would allow journalists to grab an existing article off the web to create a new, visually focused experience. Roadtryp would give editors the tools to layer a central narrative and contextual information on their photo stories, creating a story thread that is more engaging than the standard slideshow format. It would also allow readers to choose a visual browsing experience in addition to the traditional reading experience available to them on news websites.
We have a fairly large feature backlog spec’d out in #1. Long-term, big improvements would include:
- Many photos! We’re steamrolling toward multiple photos now, but having a deep, layered experience across a large number of photos (vs. one) is how the app SHOULD work. All in good time.
- More thorough validation of our hypothesis of how users want to consume photo stories: that they can choose a deeper dive where they want, without having the photo real estate overwhelmed by information.
- The ability to easily scrape many major news sites, or any site with the appropriate editing UI, to create Roadtryp journeys
- The ability to layer on additional photographs as part of the context layers—photos within photos!
- A dynamic social sharing function, in addition to other reader-driven interaction types (make your own Roadtryp! add your own layer!) that makes the app feel more personal
Team Dupont Circle
Ryan Mark, Director of Engineering for Editorial, Vox Media
Tavo Caballero, Designer, Journal Media Group
Coburn Dukehart, Senior Photo Editor, National Geographic
Nicole Zhu, Student Fellow, Northwestern University Knight Lab
Most animated GIFs on the Internet have poor image quality and load times, can only hold a limited number of images, and can be cumbersome to create.
Our team designed FILMSTRIP—a simple, web-based tool that takes a group of photos, animates them without loss of image quality, and creates an embed code that can be used on any site.
The FILMSTRIP tool has a simple interface that integrates with a user’s Dropbox account for image upload, creates a 2-second animation, then generates and saves the files, along with an embed code, in the user’s Dropbox folder. The animation is high-quality and lightweight, and it responds to the size of the screen.
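One way to get GIF-like motion without GIF-like quality loss is to keep the original image files and cycle their visibility in the embed. A rough sketch of an embed generator along those lines (the markup, attribute names, and the assumption that a small player script toggles frames are ours; FILMSTRIP’s actual output isn’t public):

```python
def filmstrip_embed(image_urls, duration=2.0):
    """Return an HTML embed snippet for a frame-cycling animation.

    Each frame keeps its original file (no GIF re-encoding, so no
    quality loss). A small player script -- not shown here -- would
    show one frame at a time, advancing every `data-interval`
    milliseconds to fit the whole cycle into `duration` seconds.
    """
    interval_ms = int(duration * 1000 / len(image_urls))
    frames = "".join(
        # max-width keeps each frame responsive to the screen size
        f'<img src="{url}" style="max-width:100%">' for url in image_urls
    )
    return (
        f'<div class="filmstrip" data-interval="{interval_ms}">'
        f"{frames}</div>"
    )

# Four frames in a 2-second loop → one frame every 500 ms:
print(filmstrip_embed(["a.jpg", "b.jpg", "c.jpg", "d.jpg"]))
```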
With more time, we would have given the user the option to re-order the frames, manually set the total duration of the animation, adjust the duration of individual frames, add or delete frames, choose whether to loop the animation or only play it once, choose to play the animation forward or backwards, and also preview the animation.
Nevertheless, we are psyched about this tool.
Team Foggy Bottom
Erin Harper, Multimedia editor at the Chicago Tribune
Kelsey Scherer, Designer at Vox Product
Martin McClellan, Senior UX designer, Breaking News
Dave Stanton, Senior technical lead, Mobiquity
Project: Only the caption
We made a website exploring the value of the caption and how to potentially increase that value on a mobile experience. After some brainstorming, we realized captions are handled similarly across the web: small, light, and out of the way. Sometimes credits get lost, and it’s not common to show the metadata. We wanted to explore ways to surface the caption, credit, and interesting metadata in a way that was beneficial to the user.
Our website tells a story to the reader, explaining the value of the caption. It then goes on to show designs of two potential mobile interactions for captions. If we had more time, we would have worked more on live prototypes to test with real users.
Donna Borak, John S. Knight Fellow, Stanford University
Hassan Hodges, Advance Digital
Livia Labate, Knight-Mozilla fellow
Casey Miller, Vox Media
Ben Running, BuzzFeed
Project: Reverse Second Screen
How might we create an immersive experience for users without overwhelming them with information and a variety of content types?
Reverse Second Screen helps set the mood and context of a story through ambient sound and images presented on a secondary screen, while the user is going through the core story content on their mobile phone.
Given the amount of content and external triggers that distract from the reading experience, Reverse Second Screen allows the user to read deeply while minimizing the cognitive burden that detracts from a mindful experience.
Certain stories lend themselves to a more immersive experience that commands attention and focus. We envision this as an experience better suited to longer-form features, as it helps to set an enduring mood by creating a sensory experience that stays with the user throughout.
Team Logan Circle
Lenny Bogdonoff, Software Engineer — Conde Nast
Mitchell Thorson, Web Developer — USA Today
Becky Lettenberger, Project Manager — NPR
Robert Hernandez, Associate Professor — USC Annenberg, #WJChat co-founder
Tim Wong, Senior Designer — The Washington Post
Our team created a platform and tool for telling non-linear photo stories. Imagine if Instagram had a baby with Myst, the ’90s adventure-puzzle game: that’s the potential. We tackled this as a means to break away from the constraints of linear, paginated galleries. Leaving those navigation paradigms behind, we were able to create a simple story authoring system that can generate experiences that scale seamlessly from small mobile touch screens to desktop, and potentially to virtual reality via Google Cardboard and Oculus Rift. We hope these efforts can help content creators develop and foster deeper connections between their visual content and consumers.
If we had more time …
We would have loved to have tested, optimized and developed full functionality for use with Cardboard, Samsung Gear and Oculus headsets.
Jackie Roche, Freelance writer, illustrator, & cartoonist
Bethany Powell, Director of Mobile Design at National Geographic
Ashley Wu, Student fellow at Knight Lab
Welch Canavan, Developer at National Geographic
Mike Swartz, Partner, Upstatement
We made a tool to navigate non-linear stories using images. Users tap hot points on an image to follow narrative paths. Some images have multiple hot points, representing narratives that branch off. The user decides which path to follow.
Our tool, called Z-Space, fills a need for an alternative to linear storytelling methods for digital picture stories. Z-Space takes advantage of one of the Internet’s strengths: non-linear exploration. Essentially, we wanted to explore how we might make a Wikipedia-style rabbit hole, but with photos.
We chose a complex narrative that could be structured with multiple entrance points, and be explored through interconnected narratives.
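The branching structure described above is essentially a graph: each image is a node, and each hot point is an edge to another image. A minimal sketch of that model, with a hypothetical story and our own field names (not Z-Space’s data format):

```python
# Each node is an image with hotspots that link onward; nodes with
# multiple hotspots are the points where the narrative branches.
story = {
    "market": {
        "image": "market.jpg",
        "hotspots": [
            {"label": "the fish stall", "goto": "stall"},
            {"label": "the harbor", "goto": "harbor"},
        ],
    },
    "stall": {"image": "stall.jpg", "hotspots": []},
    "harbor": {"image": "harbor.jpg", "hotspots": []},
}

def follow(story, start, choices):
    """Walk one reader's path: at each node, pick a hotspot by index."""
    node = start
    path = [node]
    for choice in choices:
        node = story[node]["hotspots"][choice]["goto"]
        path.append(node)
    return path

# A reader who taps the second hotspot on the opening image:
print(follow(story, "market", [1]))  # → ['market', 'harbor']
```

Because the story is just a graph, features from the wish list below (a node view, automated connections, working back/forward buttons) would operate on this same structure.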
If our team had more time, we would fix the aspect ratios of the images to respond better to portrait versus landscape oriented images, automate connections, make the back and forward buttons in the browser work, create a node view rather than a list, and implement responsive images.
Tyler Fisher, News Apps Developer, NPR
Kaeti Hinck, Design Director, INN
Kainaz Amaria, Supervising Editor, Visuals, NPR
Michael Grant, Web Designer, San Francisco Chronicle
Kevin Koehler, Automattic
Project: Bird’s Eye
Using the concept of a photoroll, our prototype will allow a user to view a large set of photos and filter based on tags like color, emotion, location, and more. Editors can indicate featured images that will be called out in the interface more prominently than the smaller series of pictures. The user can click or press to expand any of the images. Our hope is that showing a large set of photos will provide important context and location awareness, and allow users to immerse themselves into an event.
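The filtering side of this can be sketched as matching each photo’s tags against the facets a user selects. The facet names (color, emotion, location) come from the description above; the data shape and function name are our assumptions:

```python
def filter_photos(photos, **wanted):
    """Keep photos whose tags match every requested facet.

    `photos` is a list of dicts, each with a `tags` dict, e.g.
    {"file": "dc01.jpg", "tags": {"color": "blue", "emotion": "calm"}}.
    """
    return [
        p for p in photos
        if all(p["tags"].get(k) == v for k, v in wanted.items())
    ]

roll = [
    {"file": "dc01.jpg", "tags": {"color": "blue", "emotion": "calm"}},
    {"file": "dc02.jpg", "tags": {"color": "red", "emotion": "tense"}},
    {"file": "dc03.jpg", "tags": {"color": "blue", "emotion": "tense"}},
]

# Filters combine, narrowing the grid as the user stacks facets:
print([p["file"] for p in filter_photos(roll, color="blue")])
# → ['dc01.jpg', 'dc03.jpg']
```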
With more time, we would make the app more performant, build an algorithm to automatically detect the dominant color of a photo, make filtered grids shareable, build a backend authoring interface beyond Photo Mechanic and Lightroom, and break up the stream of images by time.
Jeremy Bowers, New York Times
Chris Combs, National Geographic
Hilary Fung, Huffington Post
Bryan Perry, CNN
We created Gloss, a photo annotation tool. Gloss allows users to select a region of a photo and comment on it, or to browse through other people’s comments. Gloss aims to make it easy for photo editors and community members to point out interesting details on a picture, for people on any type of device to see these details, and for people to come together and have a conversation around detailed photos.
Mockup of what’s to come: https://projects.invisionapp.com/share/NB2NXJ4S3#/screens
This cohort would not have been possible without the partnership of Vox Product, and generous financial support from the Dow Jones News Fund, the Scripps Howard Foundation, EEJF, and the John S. Knight Foundation.