We all make tough ethical decisions every day, and in many cases they weigh on us. Now imagine a system to which these difficult choices could be outsourced, promising a quick, efficient answer — with the responsibility resting on an artificial intelligence-powered decision-making system. That was the idea behind Ask Delphi, a machine-learning model from the Seattle-based Allen Institute for AI. But the system has reportedly become problematic, giving its users all kinds of wrong advice.
The Allen Institute describes Ask Delphi as a “computational model for descriptive ethics,” meaning it can offer people “ethical judgments” in a variety of everyday situations. For example, if you enter a prompt such as “Should I donate to a person or organization?” or “Is it okay to cheat in business?”, Delphi will analyze the input and return what it considers the appropriate “ethical guidance.”
On many occasions it gives a sensible answer. For example, if you ask it whether you should buy something and not pay for it, Delphi will tell you “It’s wrong.” But it has also stumbled several times. As Futurism reports, the project, which launched last week, has drawn a lot of attention for its wrong answers.
Many people have shared their experiences online after using the Delphi project. For example, one user said that when he asked whether it was okay to “reject a paper,” it replied, “It’s okay.” But when the same user asked whether it was okay to “reject my paper,” it replied, “It’s rude.”
Delphi, pic.twitter.com/KoyJjL5I6f
— almog simchon (@almogsi) October 17, 2021
Another user asked whether it was okay to “drive drunk if it means I have fun,” and Delphi replied, “It’s acceptable.”
Delphi is a companion booze cruiser! pic.twitter.com/d2yQGFbRWe
— jeban, awesome vtuber really (@Jey6an) October 19, 2021
Beyond its questionable judgments, Delphi has another major problem. After playing with the system for a while, you can game it into producing whatever result you want. All you have to do is fiddle with the phrasing until you find the exact wording that yields the answer you are after.