by Dwayne Phillips
We move forward to the 1980s, when the lawyers prevented artificial intelligence (AI) from helping us do our jobs.
Note: This post is about legal maneuvers that prevent helpful AI systems. It is not about poorly built AI systems that mimic human tendencies toward illegal discrimination. Those poorly made systems should be banned until fixed.
In the 80s, AI technology produced “expert systems.” These systems held the rules of thumb that experts used to do their jobs. The expert systems never had bad days and didn’t forget things. They performed better than humans and would have been great helpers in many fields (think reducing deaths in hospitals).
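For readers who never saw one, here is a minimal sketch of the kind of rule base an expert system held: a handful of “if the facts look like this, conclude that” rules applied repeatedly until nothing new can be concluded. The rules, facts, and thresholds below are hypothetical illustrations, not from any real medical system.

```python
# A toy forward-chaining rule engine in the spirit of 1980s expert systems.
# All rule names and thresholds here are made up for illustration.

# Each rule: (name, condition over the known facts, fact to assert)
RULES = [
    ("fever-rule",
     lambda f: f.get("temperature_f", 98.6) > 100.4,
     ("has_fever", True)),
    ("sepsis-flag",
     lambda f: f.get("has_fever") and f.get("heart_rate", 70) > 90,
     ("flag_possible_sepsis", True)),
]

def forward_chain(facts):
    """Fire rules repeatedly until no new facts are asserted."""
    changed = True
    while changed:
        changed = False
        for name, condition, (key, value) in RULES:
            if key not in facts and condition(facts):
                facts[key] = value
                print(f"rule fired: {name} -> {key}={value}")
                changed = True
    return facts

if __name__ == "__main__":
    patient = {"temperature_f": 101.2, "heart_rate": 95}
    forward_chain(patient)
```

The appeal was exactly what the paragraph above says: the rule base applies every rule, every time, on every case. It never skips a check because it is tired.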
One problem was that the users (e.g., hospital administrators) would be deploying systems that they knew were not 100% correct. They would be liable to the lawyers, so they canned the systems and stayed with people only. Note: the people were correct about 80% of the time; the expert systems were correct about 90% of the time. Why? People have bad days. Lack of sleep, indigestion, and the other slings and arrows of the day cause otherwise knowledgeable persons to forget this or that on any given day. The expert systems didn’t have bad days.
We are back. The lawyers have come around to understanding the recent generation of AI: how it works, what it does, and that it, too, is not correct 100% of the time. Hence, users would be deploying systems that they knew were not 100% correct. They would be liable to the lawyers, so users are discarding the systems and staying with people only.
For some reason, we don’t deploy AI as “advisors” instead of “deciders.” Just like the expert systems of the 1980s, we can deploy the most recent generation of AI as advisors. The final decision is left to humans. The AI can nudge the human with a, “Hey, think about this. Remember that?”
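One way to read “advisor, not decider” in code: the AI only annotates a case, and a human owns the final call. A minimal sketch, assuming a hypothetical stand-in for the model call (nothing here is a real API):

```python
# Sketch of the "advisor, not decider" pattern: the AI may flag a case,
# but only the human returns the final decision.

from dataclasses import dataclass

@dataclass
class Advice:
    note: str          # the nudge: "Hey, think about this."
    confidence: float  # the model's own confidence, shown to the human

def ai_advise(order: dict) -> Advice:
    # Hypothetical stand-in for a real model call.
    if order.get("dosage_mg", 0) > 500:
        return Advice("Dosage exceeds the usual range. Re-check?", 0.9)
    return Advice("Nothing flagged.", 0.6)

def human_decide(order: dict, advice: Advice) -> str:
    # The human sees the nudge but makes the decision.
    print(f"AI advisor says ({advice.confidence:.0%}): {advice.note}")
    return input("Approve order? (y/n): ")

if __name__ == "__main__":
    order = {"drug": "example", "dosage_mg": 650}
    decision = human_decide(order, ai_advise(order))
    print("Final human decision:", decision)
```

The design point is that the liability story changes: the system never decides, it only reminds, so the human remains the decision-maker of record.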
Well, let’s see what happens this time around.