Someone should create a forum where AI can judge whether a statement is logical or not.
It would be great if it could deliver a NOT verdict on answers about anti-corporate advocacy, feminism-related issues, Wikipedia, and so on.
Not only that, but also cases like this:
"I went into the women's restroom by mistake and was picking up hairs when a classmate came in, so I said good morning to her and she ran away. Why?" This should get a NOT judgment.
"The AI rejected it before I could respond." This is judged TRUE.
Make it so that it can also judge correct, logical answers like these.
Posts could be quoted on social networking sites to raise ad revenue, and only TRUE-judged answers would be displayed.
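A minimal sketch of how that display rule might work. Everything here is my own illustration, not from the post: `judge_logical` is a hypothetical stub standing in for a real AI model, and its keyword check is purely a placeholder.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Post:
    text: str
    verdict: str  # "TRUE" or "NOT", assigned by the judge


def judge_logical(text: str) -> str:
    """Placeholder for the AI judge. A real system would call a
    language model here; this stub just flags an explicit
    self-contradiction marker, purely for illustration."""
    return "NOT" if "therefore not" in text.lower() else "TRUE"


class Board:
    def __init__(self) -> None:
        self.posts: List[Post] = []

    def submit(self, text: str) -> Post:
        # Every submission is judged on arrival.
        post = Post(text, judge_logical(text))
        self.posts.append(post)
        return post

    def visible(self) -> List[Post]:
        # Only TRUE-judged posts are shown, as the proposal suggests.
        return [p for p in self.posts if p.verdict == "TRUE"]
```

The design point is simply that the verdict is attached at submission time, so the display filter never has to re-judge anything.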
"For the SDGs, we should exterminate the human race" would be judged TRUE; however, a rebuttal to that opinion may also be judged TRUE, depending on its content.
There was a recent story about an AI.
When asked to judge the argument that the many must be sacrificed for the sake of one person, the AI answered "no." But when asked to judge the argument that one person must be sacrificed for the sake of all, it answered "yes."
The AI was making a mechanical judgment based on how much each sentence encompassed; it was not actually reading the sentences the way a human would.
Still, I think the ability to judge whether a sentence is logical is just barely starting to develop, as expected.
We all think we're right, and we bash each other when there's a discrepancy. But if you examine the arguments calmly, many turn out to be things like "What does that have to do with anything?" or "I don't know why, but I'm writing this with a straight face."
If you lure such people to a message board where an AI judges, and they start losing their temper at the AI, you'll know exactly who they are.
I really want something like that.
It would be easy to give anti-vaccine statements a NOT judgment, but you get the point.
For example, when news breaks of a death due to vaccine side effects, the argument that vaccines should not be given would be judged TRUE.
Similarly, if someone is reported to have collapsed and died while doing strenuous exercise on a sports field wearing a mask, the argument that people should not wear masks would be judged TRUE.
People will then claim the AI endorsed it, and so on.
So the operators will fall back on compliance and block such statements from being quoted on social networking sites.
And once compliance exists, feminists will twist their arguments to fit compliance.
The issue is freedom of expression and men's physiological reactions.
It's a shitty mount to tell men to refrain from drawing pictures while educating adolescent boys who, once grown up, will end up making babies in bed anyway.
It's akin to the doctrine that instructs people to cover their bare skin, like Islam does.
We should add to the disclaimer that we do not accept demands that refuse to recognize natural things as natural.
What about the story of discharging tritiated water into the ocean?
If an AI says it is the right thing to do, people will accept it, but that is just an answer from a machine that doesn't think about reputational damage; the human side will shrug and say it can't be helped because it's an AI.
However, if the same opinion is expressed by a human whom the AI judged to be right, there will be a big fuss: the person who said it is a living person, and the reputational damage to the fishing industry would be enormous, so everyone would be very nervous.
If protests break out on a forum where an AI judges, then disclaimers and compliance will no longer hold.
In short, as arguments accumulate they become ideas, and as those grow, dangerous ideas will arise and enemies will gather.
You'll never know until you try it, but the world is full of landmines.