Khari Johnson May 30, 2017 for VentureBeat
The AI Buddy Project brings together Vidax Center, an Argentine nonprofit focused on childhood trauma, with creative agencies Clowder Tank and WeBelievers. A project spokesperson declined to share details about the specific medical professionals involved, but said psychologists and pediatricians are among the participants.
Social media posts and text messages from family members can be scanned to give the assistant a familiar voice and make the character "a virtual member of the family," according to a pitch video. The assistant will also monitor a child's emotional well-being using natural language processing developed by The AI Buddy Project.
Rather than offering a single bot, app, or website where kids can interact with AI Buddy, the project wants to partner with the platforms kids already use, meeting children where they are instead of requiring them to come to it.
The AI Buddy Project says it will continue to refine its corpus of questions and examine its own research and thinking to ensure that "the promise of what we're building doesn't open doors we didn't want to open."
I was initially very curious about this service because I have been researching a variety of simulated relationships. As I read about AI Buddy, I thought at first that the concept was sound, but the promotional video raised a number of concerns for me. The article speaks of not opening doors that need to stay shut, yet I wonder whether, by creating the technology in the first place, the project has already lost control of that. This article has been humbling: it helped me realize that the real need is for human interaction and healthcare workers. AI and technology are not a quick fix for problems that require a human touch.
How can I use technology in a way that fosters human social connection rather than relying solely on automation from a machine disguised as something else? And where does the line sit between beneficial and harmful uses of AI?