Microsoft got a patent to turn you into a chatbot

Photograph: Stan Honda (Getty Images)

What if the most significant measure of your life’s work has nothing to do with your lived experiences, but only with your unwitting generation of a realistic digital clone of yourself, a specimen of ancient man for the amusement of the people of the year 4500, long after you’ve shuffled off this mortal coil? That is the least horrifying question raised by a recently granted Microsoft patent for an individualized chatbot.

First spotted by The Independent, the patent was filed in 2017 but only approved last month. The United States Patent and Trademark Office confirmed to Gizmodo by email that the grant does not yet permit Microsoft to make, use, or sell the technology; it only allows the company to exclude others from doing so.

The hypothetical chatbot you (imagined in detail here) would be trained on “social data,” which includes public posts, private messages, voice recordings, and video. It could take 2D or 3D form. It could be a “past or present entity”; a “friend, a relative, an acquaintance, [ah!] a celebrity, a fictional character, a historical figure,” and, ominously, “a random entity.” (The last, we might guess, could be a speaking version of ThisPersonDoesNotExist, the library of photorealistic machine-generated portraits.) The technology could also let you record yourself at a “certain stage of life” so that a future you could communicate with that snapshot.

I personally take comfort in the fact that my chatbot would be useless thanks to my limited texting vocabulary (“omg” “OMG” “OMG HAHAHAHA”), but the minds at Microsoft have thought of that. The chatbot could form opinions you don’t have and answer questions you were never asked. Or, in Microsoft’s words, “one or more conversational data stores and/or APIs may be used to reply to user dialogue and/or questions for which the social data does not provide data.” Filler commentary could be guessed from crowdsourced data drawn from people with aligned interests and opinions, or from demographic information such as gender, education, marital status, and income level. It could conjure an opinion on an issue based on “crowd-based perceptions” of events. “Psychographic data” is on the list.
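
To make that fallback concrete, here is a minimal sketch of the logic the patent describes, under assumptions of my own: a bot that answers from the person’s actual messages when it can, and otherwise imputes an answer from demographically similar strangers. Every name here (Profile, PersonaBot, crowd_opinions) is hypothetical and does not come from Microsoft’s filing.

```python
# Hypothetical sketch of the patent's fallback logic; not Microsoft's design.
from dataclasses import dataclass, field

@dataclass
class Profile:
    """Demographic info the patent says could be used for imputation."""
    gender: str
    education: str
    marital_status: str
    income_bracket: str

@dataclass
class PersonaBot:
    social_data: dict                 # topic -> what the person actually wrote
    profile: Profile                  # the person's demographics
    crowd_opinions: dict = field(default_factory=dict)
    # demographic key -> {topic -> typical opinion among similar people}

    def reply(self, topic: str) -> str:
        # Prefer the person's own words ("social data") when available.
        if topic in self.social_data:
            return self.social_data[topic]
        # Otherwise fall back to crowd-sourced opinions from people with
        # matching demographics, per the patent's "crowd-based perceptions."
        key = (self.profile.gender, self.profile.education,
               self.profile.marital_status, self.profile.income_bracket)
        return self.crowd_opinions.get(key, {}).get(
            topic, "I never said anything about that.")

# Usage: the "pizza" answer is in the person's own voice; the "politics"
# answer is imputed from strangers who merely share their demographics.
me = Profile("f", "BA", "single", "median")
bot = PersonaBot(
    social_data={"pizza": "OMG HAHAHAHA"},
    profile=me,
    crowd_opinions={("f", "BA", "single", "median"):
                    {"politics": "A generic crowd-sourced opinion."}},
)
print(bot.reply("pizza"))     # -> OMG HAHAHAHA
print(bot.reply("politics"))  # -> A generic crowd-sourced opinion.
```

Even in this toy version the unsettling part is visible: the second answer is attributed to “you” but was never yours.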

In short, we are looking at a Frankenstein’s monster of machine learning, reviving the dead through unchecked, highly personal data harvesting.

“That’s creepy,” Jennifer Rothman, a law professor at the University of Pennsylvania and author of The Right of Publicity: Privacy Reimagined for a Public World, told Gizmodo via email. If it’s any consolation, such a project sounds like legal agony. She predicted that the technology could attract disputes over the right of privacy, the right of publicity, defamation, the false light tort, trademark infringement, copyright infringement, and false endorsement, “to name only a few,” she said. (Arnold Schwarzenegger has charted the territory with this head.)

She continued:

It could also violate biometric privacy laws in states, such as Illinois, that have them. And presuming that the collection and use of the data is authorized, and that people affirmatively opt in to the creation of a chatbot in their own image, the technology still raises concerns if such chatbots are not clearly marked as impersonations. One can also imagine a host of abuses of the technology similar to those we see with deepfakes; likely not what Microsoft would plan for, but foreseeable nevertheless. Convincing but unauthorized chatbots could create national security problems if one were, for example, purportedly speaking for the president. And one can imagine unauthorized celebrity chatbots proliferating in ways that could be sexually or commercially exploitative.

Rothman noted that while we already have lifelike puppets (deepfakes, for example), this patent is the first she has seen that combines that technology with data harvested via social media. There are a few ways Microsoft could temper concerns, with varying degrees of realism and clear disclaimers. Embodying the bot as Clippy the paper clip, she said, might help.

It is unclear what level of consent would be required to compile enough data for even the most lifelike digital waxwork, and Microsoft did not share potential user agreement guidelines. But other laws likely to govern the data collection (the California Consumer Privacy Act, the EU’s General Data Protection Regulation) could throw a wrench into chatbot creation. On the other hand, Clearview AI, which notoriously provides facial recognition software to police and private companies, is currently litigating its right to monetize its repository of billions of avatars scraped from public social media profiles without users’ consent.

Lori Andrews, a lawyer who has helped shape guidelines for the use of biotechnologies, imagined an army of rogue evil twins. “If I were running for public office, the chatbot could say something racist as if it were me and sink my prospects for election,” she said. “The chatbot could gain access to various financial accounts or reset my passwords (based on aggregated information such as a pet’s name or a mother’s maiden name, which is often accessible on social media). A person could be misled, or even harmed, if their therapist took a two-week vacation but a chatbot mimicking the therapist continued to provide and bill for services without the patient’s knowledge of the switch.”

Hopefully that future never comes to pass, and Microsoft has offered some acknowledgment that the technology is creepy. When asked for comment, a spokesperson directed Gizmodo to a tweet from Tim O’Brien, General Manager of AI Programs at Microsoft: “I’m looking into it. The application date (April 2017) predates the AI ethics reviews we do today (I’m on the panel), and I’m not aware of any plans to build or ship it (and yes, it’s disturbing).”
