Building a Cross-Platform 360-degree Video Experience at The New York Times

Over the past few months, 360-degree videos have gained a lot of traction on the modern web as a new immersive storytelling medium. The New York Times has continuously aimed to bring readers as close to stories as possible. Last year we released the NYT VR app with premium content on iOS and Android. We believe VR storytelling allows for a deeper understanding of a place, a person, or an event.

This month, we added support for 360-degree video to our core news products across web, mobile web, Android and iOS to deliver an additional immersive experience. Times journalists around the world are bringing you one new 360 video every day: a feature we call The Daily 360.

The current state of 360 videos on the Web

We’ve been using VHS, our New York Times Video Player, for playback of our content on both web and mobile web for the last few years. Building support for 360 videos on those platforms was a huge challenge. Even though WebGL support is relatively mature these days, there are still issues and edge cases depending on the platform and browser implementation.

To circumvent some of those issues, we had to implement a few different techniques. The first was the use of a “canvas-in-between”: we draw the video frames into a canvas and then use the canvas to create a texture. However, some versions of Microsoft Internet Explorer and Microsoft Edge are not able to draw content to the canvas if the content is delivered from a different domain (as happens with a content delivery network, or CDN), even if you have the proper cross-origin resource sharing (CORS) headers set. We investigated this issue and found that we could avoid it by using HTTP Live Streaming via an external library called hls.js.

Safari has the same CORS limitation. It appears to be a longstanding issue in the underlying media framework, and in this scenario the hls.js workaround doesn’t solve the problem. We tackled this issue with a combination of two techniques:

Read more...

Using Microservices to Encode and Publish Videos at The New York Times

Video publishing at The Times is growing
For the past 10 years, the video publishing lifecycle at The New York Times has relied on vendors and in-house hardware solutions. With our growing investment in video journalism over the past couple of years, we’ve found ourselves producing more video content every month, along with supporting new initiatives such as 360-degree video and Virtual Reality. This growth has created the need to migrate to a video publishing platform that could adapt to, and keep up with, the fast pace that our newsroom demands and the continued evolution of our production process. Along with this, we needed a system that could continuously scale in both capacity and features while not compromising on either quality or reliability.

A solution
At the beginning of this year, we created a group inside our video engineering team to implement a new solution for ingesting, encoding, publishing and syndicating our growing library of video content. The main goal of the team was to implement a job processing pipeline that was vendor agnostic and cloud-based, along with being highly efficient, elastic and, of course, reliable. Another goal was to make the system as easy to use as possible, removing any hurdles that might get in the way of our video producers publishing their work and distributing it to our platforms and third-party partners. To do that, we decided to leverage the power of a microservices architecture combined with the benefits of the Go programming language. We named this team Media Factory.

The setup
The first version of our Media Factory encoding pipeline is being used in production by a select group of beta users at The New York Times, and we are actively working with other teams to fully integrate it within our media publishing system. The minimum viable product consists of these three different parts:

Acquisition: After clipping and editing the videos, our video producers, editors, and partners export a final, high-resolution asset, usually in ProRes 422 format. Our producers then upload the asset to an AWS S3 bucket to get it ready for the transcoding process. We implemented two different upload approaches:
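The two approaches themselves are described in the full post, and Media Factory’s services are written in Go, so the following is purely a hedged illustration of the acquisition step rather than the team’s actual code; the bucket and key names are made up. In Python with boto3, pushing a finished master to S3 could look roughly like this:

# Minimal sketch of the acquisition step: uploading a finished, high-resolution
# asset to S3 so it is ready for transcoding. Bucket and key names are
# hypothetical; Media Factory's real upload services are written in Go.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

def upload_master(local_path: str, bucket: str, key: str) -> None:
    # ProRes masters are large, so use multipart uploads with a bigger chunk size.
    config = TransferConfig(multipart_threshold=64 * 1024 * 1024,
                            multipart_chunksize=64 * 1024 * 1024)
    s3.upload_file(local_path, bucket, key, Config=config)

upload_master("final_cut.mov", "example-video-masters", "daily-360/2016-11-01/final_cut.mov")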

Read more...

We Went to the Grace Hopper Celebration. Here’s What We’re Bringing Back


15,000 attendees packed Houston’s Toyota Center for the opening of the Grace Hopper Celebration of Women in Computing. Tessa Ann Taylor/The New York Times

Members of The New York Times Developers recently made their first group trip to the Grace Hopper Celebration. At 15,000 attendees, GHC is the world’s largest gathering of women in computing. We chose it because nowhere else could we find so many women software engineers coming together to talk about what we do with technology, and what it’s like to work as a woman in technology.

The conference was overwhelming and…dare we say it…awesome. Not only did we meet hundreds of women from all sorts of backgrounds, industries and levels of experience, we ourselves got the opportunity to let people know the breadth and depth of our work. Those things alone made the experience worth it.

It is rare to see so many women technologists all at once, and the experience made us reflective in a way that felt important to share. So below, some thoughts from some of the team that attended.


I came to the Grace Hopper Celebration representing The New York Times with the hope that my presence and interactions as an underrepresented woman of color could encourage women of all shades and labels to continue exploring roles in technology. What I got in return was that plus so much more. Not only did the conference re-energize my love for all things code, it solidified the importance of being a role model for engineers who are also women of color. It was gratifying to have young women come up to me and tell me how reassuring it was to see a face that looks a lot like theirs talking to them about what it is like being an engineer at The New York Times. The conference also ignited a firestorm of ideas to explore, from solving civic and social problems with data to accounting for diversity in natural user interactions for emerging technologies.

I learned that I am capable of a lot more than I may give myself credit for and that I can use my vast experience as both an engineer and an artist to become a person of influence. More importantly, I discovered there is still so much work to do to advance women in technology, so many open questions that need to be answered, and many conversations that need to be had. Looking ahead, I want to continue searching for gaps in diversity that have yet to be bridged and help The New York Times diversity initiative stay dynamic and progressive, while continuing to raise the bar with innovative thinking.
  – Corina Aoi, Software Engineer, Home team

Read more...

Introducing kyt — Our Web App Configuration Toolkit

🔥 Welcome to configuration hell

Every sizable JavaScript web app needs a common foundation: a setup to build, run, test and lint your code. Fortunately, there’s a multitude of tools to assist you, but they share one downside: extensive configuration. It’s not uncommon to see several hundred combined lines of configuration and scripts before you can start building your product. Typically, you’ll need configurations for a transpiler, the server build, the client build, tests, style and script linting, and several scripts to tie those tools together. To make matters worse, configuration can lead to a complicated matrix of dependencies, where one minor change can cause bugs with cryptic errors and waste hours of debugging and searching the internet.

As a consequence of this configuration hell, boilerplates have become a popular way to start an app. The most significant benefit of using a boilerplate is being able to start a new project quickly with an opinionated toolset; there are so many ways to set up a client-side app that making every choice yourself is frustrating from the outset.

While boilerplates make setup easy, they become problematic soon after you start using them. They dump several hundred lines of configuration into your app, so what made the initial setup easy becomes a burden: it is now the developer’s responsibility to understand and maintain hundreds of lines of code written by someone else, which is brittle and time-consuming.


Introducing kyt: your escape from configuration hell

There is a need for a tool that sits in between large boilerplates and the underlying configurable toolsets. That’s why we built kyt (pronounced “kit”). kyt is designed to abstract away complicated configuration and let developers focus on writing their source code, while still having the power to make important choices about their app. It provides a solid base for building web apps in Node, while being flexible enough to be useful for a variety of projects.

Read more...

Testing Varnish Using Varnishtest

I work on the Content API team at The New York Times, and we have a lot of legacy code. Over the past year I’ve spent months modernizing our platform for our continuous delivery initiative. If you have a lot of legacy code, and the people before you weren’t developing with continuous delivery practices, chances are you have your own fair share of challenges. One of the tougher challenges I faced was learning how to test the systems I inherited, and one of those systems was our Varnish stack.

Varnish (Varnish Cache) is a caching proxy server that’s full of features. It’s written in C and has its own Varnish Configuration Language, VCL. VCL has a lot of features you can adjust to make Varnish do what you need it to do, but it can’t do everything. If you want to add features, you can either create your own module or add the feature using inline C. When our VCL was created, we included inline C, which is frowned upon now. C is powerful, but it’s also easy to make big mistakes, and it isn’t a language our team uses often. So, after careful analysis, I determined that I could replace the inline C with plain VCL, which would make the configuration easier to read and maintain. But I needed a good way to test my changes.

Testing VCL, which is necessary for continuous delivery, was painful in the beginning. When I started testing, I would start up a development server, modify the VCL, restart Varnish, use curl to make a request, then tail the logs or examine the output to verify everything worked correctly. Pretty painful. I did some digging around and eventually discovered there’s an easier way. Learning how to test VCL wasn’t easy, but it can be, and that’s what I want to share with you in this post: how to test Varnish VCL.

Let’s imagine a real-life feature and how we might go about testing it. Programmers use jQuery and sometimes turn on jQuery cache busting. When enabled, jQuery adds a timestamp, e.g., _=1331829184859, to the query string in an attempt to bust the cache. If I strip that query string parameter, I can prevent jQuery from busting our cache. Here’s one way I could clean our URL using VCL:
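The VCL snippet itself appears in the full post. Purely as an illustration of the transformation being described (dropping jQuery’s “_” cache-busting parameter before the URL is used), here is a rough Python sketch; it is not the post’s VCL:

# Illustration only: strip jQuery's "_" cache-busting parameter from a URL,
# the same transformation the VCL in the post performs on the request URL.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_cache_buster(url: str) -> str:
    parts = urlsplit(url)
    # Keep every query parameter except the jQuery timestamp, e.g. _=1331829184859.
    params = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True) if k != "_"]
    return urlunsplit(parts._replace(query=urlencode(params)))

print(strip_cache_buster("/svc/content?id=42&_=1331829184859"))
# -> /svc/content?id=42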

Read more...

Putting {Style} into the Online New York Times Stylebook


In 1895, the editors of The New York Times created the inaugural version of the paper’s Manual of Style and Usage, a guidebook to the publication’s particular rules of grammar, punctuation, spelling and capitalization that remains an essential part of our newsroom toolkit. Since then, it’s been updated regularly to reflect the changing times (the word email, for example, appeared as early as 1985 and was styled as “e-mail”). In 1999, the first online version of the manual, known as the Stylebook, became available on the NYT intranet.

In 2013, when I was working as a mobile product designer for The Times, I helped to create an iPhone-only “app” version of the manual. This was a step in the right direction, but I wanted to do even more. I was interested in creating a new version of our living document that was more modern, accessible and usable.

So, in 2015, I started to reimagine and redesign the Stylebook as a fully responsive web app: one that could be used on any device, regardless of platform. Along the way, I considered the importance of search, ease of use and, of course, typographic elegance. I designed a desktop version, tablet version and phone version, all maintaining the same functionality.

Then along came this year’s Maker Week. During the kickoff meeting, I mentioned this project and asked if anyone would be interested in helping me push it further along. Sure enough, a flurry of emails started coming in. People from different departments, disciplines and backgrounds, including some I had never met, ended up forming the team. Over the course of five days, the Stylebook team (Chris Ladd, Nina Feinberg, Oliver Hardt, William Davis, Marie-France Han, Hamilton Boardman and myself) was able to build out a beautiful, fully functioning prototype, complete with feature enhancements that are crucial to modern-day newsroom usage:

  • Clean, legible typography
  • Fully responsive web app
  • Deep linking to entries

Newsroom editors have started using the prototype and are giving us plenty of feedback, which we’ll use to continue making improvements and resolving issues. We’re very excited about what we’ve created so far and know that it wouldn’t have been possible without all of the work that was done on the original version by Walt Baranger, Tom Brady, Bill Connolly, Ray Lewis, Merrill Perlman, Al Siegal, Keith Urban and Ted Williamson.

Why We Should All Digest Our Data

The shift from reading news in print to reading news online has dramatically increased the data available about our products. With print, user engagement is opaque: if you recycle your Sunday paper without reading it, we’ll never know. With digital products, it’s increasingly easy to collect data on how users interact with online articles, smartphone apps, or email newsletters.

However, a mere increase in statistics collection is no guarantee of accurate interpretation or insight. The challenge now lies in effective analysis and distribution of data in a noisy environment. Big data may be a hot, new buzz phrase of the digital future, but the critical question is unchanged: How do we understand our metrics and use them to better our products?

As a software intern at The Times this summer, I’ve been lucky to work on a project that both enables data insights and serves as a nice example of how product and technology teams can support our mission in the newsroom. Working on the NYT Email team, which is responsible for the internal email platform that supports our popular, free email newsletters, I built a software package to process some of the statistics we already track.

My package, endearingly dubbed Stats Digest, detects changes in basic email engagement metrics, like open and click rates for each instance of a newsletter product, as well as large changes in subscriber counts. When a change is deemed significant, an email alert notifies a relevant person, likely a newsletter producer, of the change. The sensitivity in detecting changes is customizable and dynamic, though the math is not complicated. Essentially, I’m looking at averages and day-to-day deviations to determine what constitutes a significant change. It’s conceptually simple, but the potential impact on the usability of these metrics is huge.
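The post doesn’t include the package’s code, but the check it describes is essentially a comparison of today’s value against a recent average. A minimal sketch of that idea, with hypothetical metric values, window and threshold:

# Minimal sketch of the kind of check Stats Digest describes: compare today's
# value for a metric against its recent average and flag large deviations.
# Metric values, window size and threshold here are hypothetical.
from statistics import mean, stdev

def is_significant(history: list[float], today: float, n_sigmas: float = 2.0) -> bool:
    """Flag today's value if it deviates from the recent average by more
    than n_sigmas standard deviations."""
    if len(history) < 2:
        return False
    return abs(today - mean(history)) > n_sigmas * stdev(history)

open_rates = [0.31, 0.33, 0.30, 0.32, 0.29, 0.31, 0.30]  # last seven sends
if is_significant(open_rates, today=0.22):
    print("Alert: open rate for this newsletter deviates from its recent average")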


An example of an email alert sent when the engagement or subscriber statistics for a product deviate from the expected average beyond a particular threshold.

Internal reporting tools like Stats Digest enable those in business and editorial roles to actually leverage their data. At The New York Times, this point is essential. If you’re working to produce high quality news content on a tight schedule, it’s unrealistic that you’ll spend much time digging through tables of numbers to find the important ones. It’s not that the statistics aren’t important. If a newsletter receives an unusually high number of opens or precedes a massive exodus of email subscribers, you probably want to think hard about the cause. But how would you learn of such a change? There needs to be less overhead for those hoping to use data insights without spending all their time analyzing raw numbers.

Read more...

Searching for Feelings: An Intern Works on Topic and Sentiment Analysis

Despite the prestige of an internship with The New York Times, when I signed on to do software development with the Search team (before I ever stepped into the building), I assumed I’d be working on small projects, fixing bugs that needed fixing, and essentially coding in whatever spare capacity was required. I was perfectly happy to do so, of course, but I was quite prepared to receive piecemeal assignments for completion in small fragments of time.

As things turned out, I’m writing this while my code is compiling. It’s been about 25 minutes so far. As orientation to the internship program was wrapping up at the beginning of June, I was greeted by the Search team and almost immediately given a swath of personal projects to choose from: individually directed, team-assisted tasks that I could spend the full ten-week internship on, with the end goal of leaving the team (and indeed, The Times at large) with a complete tool they could actually integrate and use in the search engine.

For many years, I’ve had a pretty serious interest in linguistics, but I never had the chance to do much with it save for taking a few linguistics classes at college. I always wondered if I could apply the skills I was learning as a computer science major to my academic hobby, but the two fields never seemed to cross while I was at school. So, when I was asked which of a selection of projects I wanted to pursue this summer, one prompt, linguistic sentiment analysis of articles’ subjects, stood out.

That’s how I came to be waiting 25 minutes for my code to compile. The initial idea for the semantic analyzer has come to fruition through the use of existing linguistic tools. The Search team suggested exploring Google’s recently released SyntaxNet parser, a neural network pre-trained on a massive syntactic corpus that can read new sentences, break them down into their constituent parts and then explain exactly how the constituents are related. Additionally, the project uses vaderSentiment, a Python tool out of Georgia Tech that, based on website comments and user posts on Twitter, can accurately determine the overall sentiment (positive or negative) of a snippet of text.
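As a small, hedged illustration of the sentiment half of that pipeline (the SyntaxNet integration and the real inputs are in the full post, and the sample sentence below is made up), vaderSentiment can be used roughly like this:

# Small illustration of scoring a snippet of text with vaderSentiment.
# The sentence is made up; the real project feeds in text about an article's
# subjects after SyntaxNet has parsed the sentences.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("The new park is a delightful addition to the neighborhood.")

# polarity_scores returns negative, neutral and positive components plus a
# normalized "compound" score between -1 (most negative) and 1 (most positive).
print(scores)                      # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
print(scores["compound"] > 0.05)   # a commonly used cutoff for calling a snippet positive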

Read more...

Summer Intern Report: Prototyping an Improved Search Query With Machine Learning

I was thrilled when I was offered the summer internship, for two specific reasons: first, it was with The New York Times, and second, I was going to join the Search team. I was looking for an opportunity that intersected with my interests in information retrieval and machine learning. More importantly, I would have a chance to use what I had been learning in grad school.

Before starting, I had several assumptions about the internship: I expected that it would be a great learning experience. I would also learn how search systems are structured in a professional setting and what exactly the Search team does every day. I learned the high-level architecture during the first week, and it was way more complicated than I expected.

Over time, I learned about several of its components in more depth. By attending daily stand-ups, where each team member reviewed what they were going to do that day, I got a detailed picture of the team’s work.

Another assumption was that I would be part of an ongoing project in the Search team and my work would be limited to a small component of it, but I was wrong. It was made clear that I was going to have my own project, and I was given full freedom to decide what it would be! The team gave me some awesome ideas, and after brainstorming for about a week I decided on my project: to increase the relevancy of article search results from The New York Times search engine.

Sometimes it is difficult, for a multitude of reasons, to construct a query for the information you are looking for. For instance, if you’re looking for context around a specific term, or missing a particular word entirely, using the search engine can be a hassle. What you have in mind is a vague idea and some generic words, but you are looking for specific information. Since basic text search algorithms use the query you’ve typed, verbatim, to fetch and rank documents, the results sometimes are not satisfactory, or may be entirely irrelevant to what you are looking for.

Read more...

Design Thinking for Media That Matters

At the end of May, eight New York Times product managers and engineers participated in Matter’s media startup accelerator bootcamp. Over four strenuous days, we learned the process of design thinking: a human-centered, prototype-driven approach to creating something that fills a meaningful need in someone’s life.

The bootcamp was a chance to collaborate with some of Matter’s other media partners: my team for the week included a photo editor from the Associated Press, the public editor of the Kansas City Star, and the CEO of PRX. Even though we all had vastly different professional backgrounds, the design thinking exercises we learned allowed us to start working as a productive team quickly.

Of all the techniques we learned in the course of the week, I want to focus specifically on some that can be applied to engineers working on cross-functional product teams.

Engineers are often partially immersed in product development, but writing code doesn’t feel like a user-focused task compared to a role like design or product management. Design thinking encourages the sort of radical collaboration that values diverse skill sets regardless of specialization.

As an early part of the design thinking process, we ventured onto the streets of Manhattan to talk to potential users and learn about their needs. We set aside our assumptions about what we expected to find and started by simply asking questions, listening, and then looking for patterns and insights in what we heard. Our subsequent ideation sessions and prototype development were informed by the perspective we’d gained.

This type of user empathy can help with technical decision-making as well. Performance considerations and other technical details that are delightful parts of a good user experience gain traction from an empathetic viewpoint. Plus, basing as many decisions as possible on feedback from real (or potential) users helps not only to ensure the success of the product but to prevent any individual’s preferences or biases from arbitrarily influencing it.

In our bootcamp ideation sessions, we set explicit norms around generating at least a hundred ideas and building on others’ suggestions rather than critiquing them. Once we had a hundred options to choose from, we concluded our brainstorming and voted on two or three we found most exciting and meaningful.

Read more...

Girls Who Code Visit The New York Times

Girls Who Code students develop paper prototypes during their New York Times field trip. Sarah Bures/The New York Times

A sea of excited young female faces. A crowded room of high school students fidgeting and waiting expectantly. Taking selfies and snaps from the moment they entered the room. But this is not a gaggle of adolescent fans waiting for a Zayn Malik concert to begin. These are young, ambitious girls exploring the possibility of joining the ever-expanding tech industry.

The New York Times Technology department, in partnership with Girls Who Code, hosted “The New York Times: Reporting Online and Around the World.” The July 28 event gave the 90 participants, all 10th and 11th graders, a glimpse into the inner workings of the newsroom, the technology group and the product development teams of one of the best-known newspapers in the world. The girls were shown the roles that technology, innovation and collaboration play in our multiplatform organization.

“We are thrilled to welcome Girls Who Code and the next generation of female tech leaders at The Times,” said Erin Grau, vice president of operations at The Times and co-chair of the Women’s Network. “We are so inspired by the work of Girls Who Code, an organization who shares our commitment to closing the gender gap in technology.”

Girls Who Code is a national nonprofit organization that encourages girls to get into coding and development. Many of the 15- to 17-year-olds were already passionate about going to college to study computer science. They were enthusiastic about the program but complained that their schools either don’t offer any programming or computer-oriented classes or, if they do, that those classes tend to be male-dominated. Development and coding weren’t considered a future option until Girls Who Code came into their lives. One girl went as far as to say, “coding is easy, you just need to know the language and practice.” This is the kind of confidence that programs like Girls Who Code were set up to instill.

Carrie Price, one of the coordinators of the event and a software engineer at The Times, credited her confidence and determination to pursue a career within the tech sector to a similarly early exposure to coding and development.

Read more...

The Future of the Past: Modernizing the New York Times Archive

The New York Times recently celebrated its 20th year on the web. Of course, today’s digital platforms differ drastically from those of decades past, and this makes it imperative that we modernize the presentation of archival data.

In 2014, we launched a redesign of our entire digital platform that gave readers a more modern, fluid, and mobile-friendly experience through improvements such as faster performance, responsive layouts, and dynamic page rendering. While our new design upgraded reader experience for new articles, engineering and resource challenges prevented us from migrating previously published articles into this new design.
The new and old versions of an NYTimes article side-by-side.

Today we are thrilled to announce that, thanks to a cross-team migration effort, nearly every article published since 2004 is available to our readers in the new and improved design.

As so often happens, the seemingly ordinary task of content migration quickly ballooned into a complex project involving a number of technical challenges. It turns out that converting the approximately 14 million articles published between 1851 and 2006 into a format compatible with our current CMS and reader experiences was not so straightforward.

Challenge Accepted

At first, the problem seemed simple: we had an archive of XML, and we needed to convert it into a JSON format that our CMS could ingest. For most of our archive, from 1851 to 1980, the XML files included sufficient data, and all we needed to do was parse the XML and rewrite it in the new format.
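The actual archive schema and CMS format are more involved than this summary suggests, but the heart of that step is a parse-and-rewrite. A minimal sketch, with made-up element and field names:

# Minimal sketch of the parse-and-rewrite step: read an archival article from
# XML and emit JSON for the CMS. The element and field names here are made up;
# the real archive schema and CMS format are more involved.
import json
import xml.etree.ElementTree as ET

def article_xml_to_json(xml_text: str) -> str:
    root = ET.fromstring(xml_text)
    doc = {
        "headline": root.findtext("headline", default=""),
        "publication_date": root.findtext("pubdate", default=""),
        "byline": root.findtext("byline", default=""),
        "body": [p.text or "" for p in root.findall("body/p")],
    }
    return json.dumps(doc)

sample = """
<article>
  <headline>Example Headline</headline>
  <pubdate>1912-04-16</pubdate>
  <byline>By A Reporter</byline>
  <body><p>First paragraph.</p><p>Second paragraph.</p></body>
</article>
"""
print(article_xml_to_json(sample))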

Stories from 1981 through 2006 were trickier. We compared the articles parsed from XML to a sample of articles currently served on the website and found that in 2004 alone there were more than 60,000 articles on our website that were not included in the XML archive. From 1981 onward, there were possibly hundreds of thousands of online-only articles missing from the archive, which reflected only what appeared in the print edition. This posed a problem: missing articles would show up as 404 Not Found pages, which would degrade the user experience and damage our search engine rankings.

Creating the Definitive List of Articles

Read more...

Why The New York Times is Working With Matter

For years I’ve followed the progress of Matter Ventures, the San Francisco-based media accelerator run by Corey Ford. I can no longer remember exactly how I was introduced to Matter and to Corey, but I do remember the first demo day that I attended a few years ago, at an event space associated with WNYC. Somehow I had gotten an invitation, but I was on the fence about whether to go. I had recently started in a new role, and was feeling pressed for time. In the end, I went, mostly because an engineering lead I was trying to hire was likely to be there and I was hoping to stalk him in his natural habitat.

When I got there, I recognized a face, then another, and another, and I realized I was walking into a room full of many of the most talented people in digital media in New York, with all sorts of opportunities for stalking talent! And this from an accelerator that was based in San Francisco. When the demos started, and the ideas began to flow, I was sure there was something special going on.

Over the next several years I got to know Corey and his program better. I was more and more impressed. The rigorous application of design thinking, the selectiveness applied to the participating startups, the quality of the ideas and the people, the energy surrounding the whole process, all supported my initial reaction.

I also loved the enthusiasm and optimism around the potential of digital media. Despite the very real disruption of the industry, Matter clearly believes in the potential of media to reinvent itself, evolve and thrive. I do too.

However, it was never practical for my New York-based organization to actually work with Matter because they ran their program in San Francisco. While I understood the obvious appeal of the Bay Area, I thought it was a shame for Matter’s presence in New York to be limited to the demo day: a missed opportunity for both Matter and the city that is the undisputed media capital of the world.

So I couldn’t be more pleased that Matter is finally launching a class in New York, with the participation of The New York Times and the support of the Google News Lab. I’m so happy to be able to offer the team here at The Times the opportunity to work alongside the inaugural Matter NYC class. And as a veteran of the first wave of the Silicon Alley startup scene back in the ’90s, I am thrilled to play a small role in injecting this wonderful ingredient into what is now an incredibly vibrant tech/media/startup scene in New York.

Read more...

Our Tagged Ingredients Data is Now on GitHub

Since publishing our post about “Extracting Structured Data From Recipes Using Conditional Random Fields,” we’ve received a tremendous number of requests to release the data and our code. Today, we’re excited to release the roughly 180,000 labeled ingredient phrases that we used to train our machine learning model.

You can find the data and code in the ingredient-phrase-tagger GitHub repo. Instructions are in the README, and the raw data is in nyt-ingredients-snapshot-2015.csv.

There are some things to be aware of before using this data:

  1. The ingredient phrases have been manually annotated by people hired by The New York Times, whose efforts were instrumental in making the success of our model possible.
  2. The data can be inconsistent and incomplete. But what it lacks in quality, it makes up for in quantity.
  3. There is not a tag for every word and there are sometimes multiple tags per word.
  4. We have spent little time optimizing the conditional random fields (CRF) features and settings because the initial results met our accuracy needs. We would love to receive pull requests to increase the accuracy further.

Examples

INPUT | NAME | QUANTITY | UNIT | COMMENT
1 6-inch white-corn tortilla | white-corn tortilla | 1.0 | | 6-inch
3 cups seedless grapes, equal amounts of red and green grapes | grapes | 3.0 | cup | seedless, equal amounts of red and green
1/4 cup good quality olive oil | good quality olive oil | 0.25 | cup |
3 large cloves garlic, smashed | garlic | 3.0 | clove | smashed
Rind from 1/2 pound salt pork | salt pork | 0.5 | pound | Rind from
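As a minimal sketch of reading the snapshot into per-phrase records, loading the CSV might look like this in Python; the column names below are a guess based on the examples above, so check the repo’s README for the actual schema before relying on them:

# Minimal sketch of reading the labeled snapshot. The column names below are a
# guess based on the example table above (input/name/qty/unit/comment); check
# the repo's README for the actual schema.
import csv

with open("nyt-ingredients-snapshot-2015.csv", newline="") as f:
    for row in csv.DictReader(f):
        labeled = {
            "input": row.get("input", ""),
            "name": row.get("name", ""),
            "quantity": row.get("qty", ""),
            "unit": row.get("unit", ""),
            "comment": row.get("comment", ""),
        }
        print(labeled)
        break  # just show the first record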

Learning and Exploring on 100% Day

When you hear “hackathon,” you envision a scene from “The Social Network”: bleary-eyed developers working into the early morning, slamming energy drinks, furiously typing away on keyboards, the end goal being to best your competition and be showered in glory by your peers. When I joined The New York Times two years ago, I assumed my first 100% Day would be similar; nothing could be further from the truth.

The Times periodically hosts an internal 100% Day. A typical 100% Day, or “Maker Day,” at The Times fosters a spirit of collaboration and personal development. We use this time to better ourselves, to learn something new, or to build something that we may be interested in. It’s a time when we can dig in, hang out with other members of the organization, and just learn. At the end of the day, we share whatever knowledge we’ve gained, or things we’ve built, with the rest of the company.

The March version of 100% Day was no different. Inspired by the work of Lara Hogan, I spent most of my day investigating ways to boost the speed of our current site. I dug into a tool called vmprobe and found ways to optimize our autoscaling efforts on the Real Estate section.

There were dozens of talks, ranging from research findings to full-blown demos. Here are some that I personally found interesting:

NYT Reactions: Jared McDonald, Jeff Sisson and Angel Santiago, from Technology, built a system to allow emoji reactions to articles. The goal was to create a mechanism for user feedback that’s more at home in a mobile setting, and which could attract readers who would not otherwise be comfortable composing a fully fledged comment. The system was designed to be flexible enough to accept a range of emotional reactions, so anything from a “Recommend” to an emoji is possible.

Lunchbot: Chris Ladd, from Digital Design, built a Slack bot to respond to the question: “What’s on the menu in the cafe on the 14th floor?” Chris demonstrated his code live on stage, and integrated the bot into several NYT Slack channels. He has also graciously open-sourced his code. If you’re so inclined, you can check out the code on GitHub.

Read more...
