Jeff Blankenburg spent most of his career as a maker. He’s been involved in both strategy and software—working on everything from building a site for Victoria’s Secret to building user interfaces for technical startups. It wasn’t until a technical evangelist job offer from Microsoft in 2007 that he realized he could make an entire career around speaking—and that is what he continues to do at Amazon, working with the Alexa service.
Blankenburg led a session on building a simple skill for Alexa that allowed users to have a back-and-forth dialogue with the machine, based on scripts from participants’ favorite movies or television shows. He was inspired to make this project after helping his daughter rehearse for one of her school plays; as he recited each line back to her, he realized this was a fun project Alexa could assist with.
With the help of AWS Lambda and one of the many code templates available on the Alexa GitHub, Blankenburg showed the group how to create a function that lets users set unique Alexa reactions to different “utterances,” the actual prompts that users say to the device.
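A skill like this is ultimately a function that receives an Alexa request and returns a spoken response. Below is a minimal, hypothetical sketch of such a Lambda handler in plain Python (a real skill would typically use one of the ASK SDK templates Blankenburg mentioned; the intent name `NextLineIntent` and the reply text are invented for illustration):

```python
# Simplified sketch of an AWS Lambda handler for an Alexa skill.
# The event shape loosely follows the Alexa Skills Kit JSON request format;
# "NextLineIntent" is a hypothetical intent name, not from the session.

def lambda_handler(event, context):
    request = event.get("request", {})
    if request.get("type") == "IntentRequest":
        intent_name = request["intent"]["name"]
        if intent_name == "NextLineIntent":
            speech = "You talkin' to me?"  # placeholder reply line
        else:
            speech = "I don't know that line."
    else:
        # LaunchRequest or anything else: prompt the user to start.
        speech = "Say a line from your script."

    # Alexa expects a JSON response with an outputSpeech object.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False,
        },
    }
```

The key idea is simply routing on the incoming intent and returning text for the device to speak.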
For this project, the utterances were lines of movie or show dialogue. When Alexa processed them, using a combination of automatic speech recognition and natural language understanding, the device would read back the next line in the movie or show.
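The "read back the next line" behavior boils down to a lookup: given the line the user just recited, return the line that follows it in the script. A small sketch, using an invented script rather than anything from the session:

```python
# Hypothetical script, stored in order. Any list of lines would work.
SCRIPT = [
    "to be or not to be",
    "that is the question",
    "whether tis nobler in the mind",
]

# Map each line to the one that follows it.
NEXT_LINE = {line: SCRIPT[i + 1] for i, line in enumerate(SCRIPT[:-1])}

def next_line(utterance):
    # Loose normalization; Alexa's ASR/NLU handles far more variation
    # than a lowercase-and-strip pass does.
    return NEXT_LINE.get(utterance.lower().strip(),
                         "That's the end of the scene.")
```

In a full skill, `next_line` would be called from the intent handler, with the recognized utterance pulled from the request's slot values.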
One question Blankenburg said he always tries to tackle in his work is how to make these voice systems feel intuitive and natural: “How do you make it feel as much as a human being as possible?” Participants worked toward this in their skills during the session by incorporating Speech Synthesis Markup Language (SSML) to make the dialogue with the machine feel similar to the movie or TV show. Aidan Feay was able to add a little extra twang to Alexa as she read his Seinfeld script, each detail making the machine sound more like a real person.
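SSML works by wrapping the response text in markup that adjusts delivery. A small sketch of building such a response (the specific `rate` and `pitch` values here are arbitrary examples, not what any participant used):

```python
# Wrap reply text in SSML prosody tags so Alexa changes its delivery.
# Alexa's SSML supports <prosody> attributes like rate and pitch.

def to_ssml(text, rate="95%", pitch="-5%"):
    return (
        "<speak>"
        f'<prosody rate="{rate}" pitch="{pitch}">{text}</prosody>'
        "</speak>"
    )
```

To use SSML, the skill's response sets `outputSpeech` to `{"type": "SSML", "ssml": to_ssml(...)}` instead of plain text.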
After the session, participants were able to walk away with new skills—both for themselves, and their Alexa devices.