In short The artificial intelligence algorithms Google Docs uses to suggest edits that make writing more inclusive have been criticized for being annoying.
Language models are used in Google Docs for features like Smart Compose, which suggests words to automatically complete sentences as the user types. The Chocolate Factory now wants to take things a step further and is rolling out “assisted writing,” another AI-powered system designed to help people write more impactful documents faster.
Assisted writing is being introduced to enterprise users, and the feature is enabled by default. Not everyone likes being guided by the algorithm, and some people find its “inclusive language” suggestions irritating, Vice reported.
Words like “policeman” might prompt the model to suggest a more neutral replacement, such as “police officer”. That’s understandable, but it can get a bit ridiculous. For example, it suggested replacing the word “landlord” with “property owner” or “proprietor”. It doesn’t like the word “Motherboard” either, as one writer found.
“Assisted writing uses language understanding models, which rely on millions of common phrases and sentences to automatically learn how people communicate. This also means they may reflect certain human cognitive biases,” a Google spokesperson told Vice. “Our technology is constantly improving, and we don’t yet (and may never) have a comprehensive solution to identify and mitigate all unwanted word associations and biases.”
Fairness in AI is complicated
As experts strive to create the holy grail of a perfect and unbiased intelligent system, fairness in machine learning models is proving to be a difficult thing to measure and improve upon.
Why? Well, for starters, there are apparently 21 definitions of fairness in academia. Fairness means different things to different groups of people: what might be considered fair in computer science may not match what is considered fair in, say, the social sciences or law.
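To make that concrete, here is a minimal, hypothetical Python sketch (not drawn from any of the reports cited here, and using made-up toy data) comparing two of those competing formal definitions, demographic parity and equal opportunity. The same classifier can look unfair by one measure and perfectly fair by the other.

# Illustrative sketch only: two formal notions of "fairness" applied to the
# same toy classifier outputs. All data below is invented for demonstration.

def demographic_parity_gap(preds, groups):
    # Difference in positive-prediction rates between groups "a" and "b".
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return rate("a") - rate("b")

def equal_opportunity_gap(preds, labels, groups):
    # Difference in true-positive rates (recall) between groups "a" and "b".
    def tpr(g):
        pos = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(pos) / len(pos)
    return tpr("a") - tpr("b")

# Toy data: 1 = positive prediction/outcome, 0 = negative.
groups = ["a"] * 5 + ["b"] * 5
labels = [1, 1, 0, 0, 0,   1, 1, 1, 1, 0]   # group b happens to have a higher base rate
preds  = [1, 1, 0, 0, 0,   1, 1, 1, 1, 0]   # a perfectly accurate classifier

print(demographic_parity_gap(preds, groups))         # -0.4: "unfair" by demographic parity
print(equal_opportunity_gap(preds, labels, groups))  #  0.0: "fair" by equal opportunity

A classifier that simply reflects different base rates between groups fails one definition while satisfying the other, which is one reason a single, agreed-upon measure of fairness remains elusive.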
All this has led to a nightmare for the field of AI, John Basl, a philosopher at Northeastern University in the United States, told Vox, adding: “We are currently in a period of crisis, where we do not have the ethical capacity to solve this problem.” Trying to fix fairness is difficult not only because people can’t even agree on what the term means, but because the solution for one application may not be suitable for another.
It’s not always as simple as ensuring developers train on a more diverse and representative set of data. Sometimes an algorithm’s impact differs across social groups. And although some use cases, such as financial algorithms, are regulated, there is no simple fix to make these models fair.
IBM: Ethics is a major obstacle for companies adopting AI technology
IBM CEO Arvind Krishna rose through the ranks over more than 30 years to lead the company. He has witnessed ups and downs in the tech industry and said that although AI is the future, he is careful about deploying its vast capabilities in the real world. Ah, yeah, that’ll be why Watson wasn’t fully realized.
“We are probably only 10% of the journey in [artificial intelligence],” he said in an interview with the Wall Street Journal. “With the amount of data today, we know that it is impossible for us as humans to process it all. Techniques such as analytics and traditional databases can only go so far.”
“The only technique we know of that can harvest insights from data is artificial intelligence. The consumer kind of adopted it first. The biggest impact will come as companies adopt it.” But Krishna admitted that companies face hurdles because machine learning models are often biased or the technology is used unfairly.
“We have problems. We have to solve the ethics. We have to make sure that all the mistakes of the past are not repeated. We have to understand the life science of AI. Otherwise, we will create a monster. I am really optimistic that if we pay attention, we can solve all these problems,” he said. ®