Dangers of AI and Microtargeting

😱 The internet is fake, and it’s about to get a lot worse! With GPT being, at its core, an advanced random word guesser, the singularity is still a long way off. The real risk, however, lies in these models’ ability to generate fraudulent content. Combine that with the ability to microtarget online ads, and you can create dangerous misinformation.

Nina Schick, an expert on generative AI, goes as far as to predict that 90% of all online content will be AI-generated by 2025. This is not hard to believe, seeing that ChatGPT is the fastest-growing consumer application in history. Kyle Hill, a popular YouTube creator, calculated that ChatGPT currently puts out as much text every two weeks as humans have written since the Gutenberg press.

Alas, the largest danger we currently face is not a science-fiction AI takeover but the large-scale, targeted spread of automatically generated misinformation.

Let me provide you with two examples, one a matter of safety, the other commercial:

🔸 1. Let’s say an individual suffering from loneliness comes across an online post aiming to radicalize impressionable young adults. The post is fake, but it is written in a convincing manner and cites a large number of seemingly credible sources.

In reality, all of this content, including those credible-looking sources, has been fabricated, giving the malicious actor an enormous body of material with which to coerce the vulnerable young adult into believing the false information. Statistically, if the target audience is large enough, a certain percentage will radicalize, and in this way a terrorist group could recruit new members.

Imagine indoctrination with a precision of targeting and a sheer scale we have never experienced before!

🔸 2. Alright, the above might feel somewhat apocalyptic to some, so the following example is more commercial.

Imagine a company, ABC, that has produced a new type of building material for which, after a certain period on the market, there is some evidence of it being carcinogenic. Company ABC could create a virtually unlimited number of blogs, articles, news websites and more asserting the proven safety of its building material.

This way, company ABC would be able to drown out any negative search query in every search engine, simply by anticipating the search terms someone concerned about the health risks of the product would use. Given how current search engines work, there would be no way to guarantee that truthful information surfaces at all, let alone in the first few results.

Furthermore, using microtargeting, the company could sway the opinion of influential target groups in its favor, with a wealth of online content ready to “back up” its claims if and when those individuals do their own research.

🔑 Let me end with a few key takeaways.

The importance of finding and checking sources will keep rising to unprecedented levels. Furthermore, marketers, AI developers, news websites, social media platforms and search engines should do their ethical part in ensuring that the content they present as authentic actually is.

However, given the increasing difficulty of distinguishing between human-made and AI-generated content, and the widespread freedom-of-speech implications, this will prove incredibly tough.

Even if Silicon Valley AI giants, search engines and social media platforms implement strict alignment and safety protocols for their systems, the moment an entity, corporate or otherwise, builds its own generative AI, those safety protocols and/or laws go out the window.

This makes the misinformation hazard almost unavoidable. We can only counter it collectively by teaching everyone how to recognize credible sources.

Learn more about the Dangers of AI and Microtargeting over at https://newage.media/learn
