Computer systems have always been limited by the quality of the input they receive. In other words, it has always been garbage in, garbage out: no matter how good the software and the hardware are, feed them garbage and they will spew out garbage.
Now this got me thinking about AI and the dangers that experts have voiced about it. Are those dangers partly due to the fact that most people who use these tools aren't highly intellectual at all, and that they'll feed them more garbage data than useful data? AIs are trained to learn from the data input to them, but the sad truth is that there are more stupid people than sensible ones.
I recently tried ChatGPT and let it attempt to improve a blog post I had written before. As expected, it produced a piece that was well structured and pleasing to read. What concerned me was that the message was turned completely opposite to what I originally wanted to convey. I am anti-woke, and so was the content of my original post; the result was pro-woke, and this raised a lot of red flags for me. An AI's main weakness may be that if people fill it with their biased information, it will come to treat those patterns of thinking as the norm rather than the exception.
If we think of Google or Facebook as being able to manipulate people into leaning further into their biases and causing divides among them, then these new and upcoming AI tools can do more than twice the damage.