National Australia Bank: Yourself
We created an interactive, voice-activated experience to help people reflect on what they really want in life. Combining facial analysis and speech recognition, the web experience lets users create art simply by talking.
The end result, once people finish talking to themselves, is a unique visualisation of that person's real ambitions, transformed into art through technology.
I was involved from the project's early stages, starting with the concept phase, where my role was to guide the creative team while prototyping for feasibility and pushing the boundaries of what was technically achievable.
During the production phase, I coordinated and led the development team to deliver the experience, keeping security and scalability in mind throughout.
First, the user's face is scanned. A sophisticated biometric analysis then generates an illustrated version of the person's features, used as a blank canvas for the rest of the experience.
The second step is to start talking to it. As people speak to the illustration, their voice is processed with speech analysis and natural language processing to understand what they're saying and transform the main themes into illustrated patterns that fill the different parts of the face in real time.
This voice-activated, AI-powered and highly personalised portrait is ready to be saved and shared.
The experience also gives NAB the ability – with the consent of users – to collect data and possibly even reach out to customers with support to make their dreams happen.
Biometric analysis is built upon Amazon Rekognition and a set of custom algorithms that extract relevant facial features, weight the results based on facial expressions, and adjust property ratios to match the creative direction for the face illustrations.
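As a minimal sketch of that step, the function below maps a Rekognition DetectFaces response to illustration parameters. The response field names (`FaceDetails`, `Smile`, `Pose`, `Emotions`) follow Rekognition's actual DetectFaces API; the weighting, the `illustrationParams` name, and the output shape are hypothetical stand-ins for the custom algorithms.

```javascript
// Sketch: derive illustration parameters from an Amazon Rekognition
// DetectFaces response. Input field names follow the real API; the
// mapping logic and weights are hypothetical.
function illustrationParams(detectFacesResponse) {
  const face = detectFacesResponse.FaceDetails[0];

  // Dominant emotion drives the base palette (hypothetical mapping).
  const dominant = face.Emotions
    .slice()
    .sort((a, b) => b.Confidence - a.Confidence)[0];

  return {
    smiling: face.Smile.Value,          // boolean in the real response
    mood: dominant.Type.toLowerCase(),  // e.g. "happy", "calm"
    // Blend the detected head roll toward a neutral pose, balancing the
    // raw result against the creative direction (weight is hypothetical).
    tilt: 0.5 * face.Pose.Roll,
  };
}

// Example with a trimmed-down response:
const params = illustrationParams({
  FaceDetails: [{
    Smile: { Value: true, Confidence: 99.1 },
    Pose: { Roll: 8, Yaw: 2, Pitch: -1 },
    Emotions: [
      { Type: 'HAPPY', Confidence: 93.4 },
      { Type: 'CALM', Confidence: 4.2 },
    ],
  }],
});
```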
Speech analysis uses both the native Web Speech API and IBM Watson cloud services to convert people's voices into text transcripts. We then analyse the transcripts in real time using natural language processing, extracting main themes and metrics and transforming what the person is saying into animated illustrations. The symbols, shapes, positions, and colours of the patterns that fill the different parts of the illustrated faces are all driven by voice data.
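The browser half of that pipeline could be sketched as follows. The `SpeechRecognition` wiring uses the real Web Speech API; the keyword table and `extractThemes` function are a deliberately simple, hypothetical stand-in for the actual NLP layer.

```javascript
// Deliberately simple stand-in for the NLP layer: map keywords in a
// transcript to illustration themes (the keyword table is hypothetical).
const THEMES = {
  travel: ['travel', 'trip', 'world'],
  family: ['family', 'kids', 'home'],
  career: ['career', 'job', 'business'],
};

function extractThemes(transcript) {
  const words = transcript.toLowerCase().split(/\W+/);
  return Object.keys(THEMES).filter((theme) =>
    THEMES[theme].some((kw) => words.includes(kw))
  );
}

// Browser-only wiring: stream interim transcripts from the Web Speech API
// into the theme extractor while the person speaks.
function listen(onThemes) {
  const Recognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognition = new Recognition();
  recognition.continuous = true;
  recognition.interimResults = true;
  recognition.onresult = (event) => {
    const transcript = Array.from(event.results)
      .map((result) => result[0].transcript)
      .join(' ');
    onThemes(extractThemes(transcript));
  };
  recognition.start();
}
```

Running the extractor on interim results, not just final ones, is what lets the patterns fill the face while the person is still speaking.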
Designed to be highly available and scalable, the infrastructure relies entirely on AWS services. All back-end computing logic is serverless, making full use of ephemeral containers and functions.
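A minimal sketch of what one such ephemeral function might look like: an AWS Lambda handler in Node.js behind API Gateway. The handler signature and proxy response shape follow Lambda's actual contract; the route, payload, and word-count logic are hypothetical placeholders for the real analysis.

```javascript
// Sketch of one serverless back-end function (AWS Lambda, Node.js).
// The event/response shapes follow the API Gateway proxy contract;
// the payload and logic are hypothetical.
const handler = async (event) => {
  const body = JSON.parse(event.body || '{}');

  if (!body.transcript) {
    return {
      statusCode: 400,
      body: JSON.stringify({ error: 'transcript required' }),
    };
  }

  // In the real stack this would invoke the NLP analysis; here we just
  // echo a word count so the sketch stays self-contained.
  return {
    statusCode: 200,
    body: JSON.stringify({
      words: body.transcript.trim().split(/\s+/).length,
    }),
  };
};

exports.handler = handler; // Lambda entry point
```

Because each invocation is stateless, functions like this scale out horizontally under load with no server management, which is what the availability and scalability claims rest on.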
The web experience is built as a Progressive Web Application and is WCAG 2.0 AA compliant, making it accessible to people with auditory, cognitive, physical, or visual disabilities.