Here’s a question for you: How do you supervise your kid’s internet time without invading their privacy? Kidas might have an answer, with its ProtectMe software. ProtectMe isn’t just nanny software that alerts parents when their own kid misbehaves. It also recognizes signs of incoming harassment.
The program uses artificial intelligence to monitor alarming behaviors between kids in over 120 games. ProtectMe specifically looks for context; it can tell the difference between a playful jab and an actual threat.
At the end of every week the software generates a report for the parents. This report alerts them to any concerning behaviors from — or towards — their kid.
It sounds good, in theory. But I had a few questions about the particulars. So I put them to Kidas boss Ron Kerbs: how exactly does the software work, and where is it headed?
How it currently works
GamesBeat: Is ProtectMe limited to voice-chat only?
Ron Kerbs: The ProtectMe software provides both voice and text monitoring. It knows when to turn on and off based on when games or communication apps such as Discord are turned on and off.
GamesBeat: Is it user-only? That is, is the main goal of ProtectMe to observe the child’s outgoing speech, and not incoming speech?
Kerbs: ProtectMe has the ability to understand communication both in and out, meaning we know if a child was exposed to, for example, cyberbullying, as the victim, the perpetrator or a witness. For privacy and compliance reasons, we do not provide gamer tags, personal details or transcripts to parents or guardians. Only in the severest of circumstances will we offer additional context such as if there is clear potential for trafficking or the child was planning to meet a stranger in person, amongst other critical circumstances.
GamesBeat: Can the software deal with audio-splitting? If a microphone is plugged in through an audio mixer, or run through a digital audio cord, can ProtectMe detect the default Windows option isn’t in use and adjust accordingly?
Kerbs: Kidas supports audio splitting; in some instances, we analyze more than eight channels at one time. Our technology can differentiate between relevant data and background audio that we can ignore.
GamesBeat: Can ProtectMe tell the difference between a child actively saying something to someone and reading something out loud? Between another user talking and a voiced cutscene?
Kerbs: What’s novel about our technology is that it understands context as part of the game and goes beyond language processing and keyword recognition. We all know that trash talk is part of the game experience; however, the ProtectMe difference is that because our AI software has been tested against thousands of scenarios, it is able to respond correctly every time and decipher whether an interaction represents a direct threat or good-natured trash talk.
That’s pretty incredible and something that no one else in the market is offering. We also know that, generally, what’s appropriate for say a 7-year-old to be exposed to is quite different than that of a 14-year-old, and thus with the guidance of child psychologists and cyberbullying experts, we provide different threat severity scores and recommendations based on age. If someone uses a voice changer or a recording, we would be listening to the context and words being spoken, not the depth or pitch of the spoken words.
Privacy is still important
GamesBeat: Can ProtectMe recognize repeat voices? If a child routinely teams up with someone who is regularly flagged as a vocal harasser, does Kidas alert parents of who their kid is hanging out with?
Kerbs: No, we are incredibly vigilant with respect to privacy laws and security. We encourage parents who are seeing repeated threats coming through to carry out further discussion within their family. Given that these topics can be incredibly challenging for parents to discuss, we provide personalized recommendations for how they can talk to their children about what has happened.
GamesBeat: Does ProtectMe track and/or provide updates-over-time for parents? For example: If ProtectMe flags a child 5 times in a week and reports that to parents, and the next week ProtectMe flags a child 3 times, is the parent alerted to the better behavior?
Kerbs: We do track week-over-week changes but not quite like that. Our reports start with a general color scoring system to categorize the severity of threats detected that week:
Red: Immediate Action Needed. The most concerning threats, such as contact with a pedophile.
Orange: Action Needed. Concerning threats such as bullying.
Yellow: Worth Noting. Behavior you should be aware of, such as age-appropriate trash talking.
Green: Nothing Concerning. No indication of concerning threats found.
Our reports contain recommendations based on the overall severity score and which threat was detected. When we see repeated exposure to the same threats, a sequence of unique recommendations based on historical threats detected is provided.
From there, we also provide time spent playing analytics. These show what games were played the most, how many hours were played each day and in total for the week, how this compares to last week and how the total time spent playing compares to all ProtectMe gamers. We are screen-time advocates and want to keep gaming safe and fun; however, parents have told us they never know how much their kids are actually playing and how this compares to other gamers. So now, they have even more insight than without the software.
What the future looks like
GamesBeat: Will ProtectMe be able to someday monitor game-specific harassment? (e.g., a user who spams pre-set chat functions in Rocket League, like “What a save!”, over and over, to mock a player)
Kerbs: This is not a current offering, however, we have plans to expand what our analysis looks at including game-specific harassment. For example, as part of our partnership with Overwolf, we use Overwolf’s APIs to get access to game-specific events to detect those situations.
GamesBeat: Is more in-depth tracking in the future for ProtectMe? (e.g., tea-bagging, Counter-Strike sprays, etc.)
Kerbs: Although our current focus is on audio and text chats, detecting other toxic events that don’t involve peer-to-peer communication, such as teabagging and smurfing, is definitely on our roadmap.
We have also partnered with some of the largest game studios and gaming equipment manufacturers to bring them custom solutions that fit their unique technology, storylines and challenges.
The post Kidas’ ProtectMe software is keeping an ear on your kids appeared first on VentureBeat.