by Dwayne Phillips
We want AI systems to act like us. Or do we? Perhaps we want AI systems to act like some of us. But who is "us" and who is "not us"?
Testing shows that the latest and greatest large language models will generate bad information about political campaigns. That is wrong; those systems need to be fixed. Right? Maybe.
This brings to mind a research paper I read recently: is the objective of these AI systems to mimic human behavior? Or is the objective to teach facts?
If the system is to mimic humans, now and then it will say "the world is flat" and "the moon landings were faked." That is because, sometimes, some of us say those things.
That, however, is ridiculous. Systems shouldn’t repeat those foolish things that some foolish people say sometimes. Of course “foolish” is subjective, and what I consider foolish is perfectly rational to some other folks.
That is the human condition. Is the system supposed to be like all of us or just some of us? To be like all of us means foolish statements come from the system on some occasions. To be like some of us means the statements agree with what some of us would say. Things said by some of the other folks will be barred.
And then we have to decide who is "us" and who is "not us." Simple: those folks not in the room with me are "not us." Every one of us will agree with me. Right? Wrong.
Funny, our inventions are like us, and we have to decide what our inventions will do. We are an odd lot.