DeepText

Facebook announced that they are introducing a new AI (artificial intelligence) system, DeepText, that will be able to read your posts nearly as well as a human can. Their goal is to better understand the intent behind your posts and then use that information to create better-targeted ads and services. Say, for example, you post “I am hangry and need a pizza before I explode!” Facebook’s AI would be able to read that as “This person is hungry, agitated, and likely wants to get food as fast as humanly possible. I should show them ads for pizza joints near them that deliver or are within walking distance.”
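
To make that concrete, here is a minimal sketch of what the intent-extraction step might look like. Everything in it (the word lists, the labels, and the `classify_post` function) is invented for illustration; DeepText itself is a deep-learning system, not a keyword matcher, but the input and output would have roughly this shape.

```python
# Hypothetical sketch: what a post-understanding step might produce.
# DeepText is a deep-learning system, not a keyword matcher; this toy
# version only illustrates the input/output shape, not Facebook's method.

HUNGER_WORDS = {"hungry", "hangry", "starving"}
FOOD_WORDS = {"pizza", "burger", "tacos", "sushi"}

def classify_post(text: str) -> dict:
    """Return a crude intent summary for a single post."""
    words = {w.strip("!.,?").lower() for w in text.split()}
    return {
        "wants_food_now": bool(words & HUNGER_WORDS),
        "mentioned_foods": sorted(words & FOOD_WORDS),
    }

print(classify_post("I am hangry and need a pizza before I explode!"))
# {'wants_food_now': True, 'mentioned_foods': ['pizza']}
```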


From a marketing perspective, this is fantastic. It allows Facebook not only to look for keywords, but to understand the context in which they are used. As a result, advertisers can feel more confident that the ads they pay for are being shown only to the people who are likely to click on them. This in turn means more companies are comfortable paying for ads, and Facebook can charge more for them. Win-win, right?

Social Network Ethics

I would argue that Facebook’s new AI technology is not a positive thing for consumers. There is a very obvious disconnect between Facebook’s interests and consumers’ interests. Time and time again, Facebook has shown that they care more about their own interests than those of the consumer. You may have heard of some of the following incidents, but likely not all of them.


In December of 2009, the debate over Facebook’s ethics started to take off. Facebook’s team decided to change privacy settings in such a way that every user’s photos, friends lists, and posts were made public by default, and some of them could not be changed back to private. The end result was that the private information of Facebook’s users was thrown out into the open for all to read and see. After a major public outcry, Facebook rescinded the changes and introduced more security measures. While this can be viewed as an innocent “oops” moment, or as a sign that the company simply didn’t understand its user base, it is actually more serious than that. When data becomes public, it is accessible to anyone who might try to collect it. This includes hackers, scammers, stalkers, and the individuals and companies that collect and sell data. A selfie may seem harmless, but it carries a lot of information: where the person spends their time, who they spend it with, their hobbies and political affiliations, and more. That information can be extremely dangerous in the wrong hands. And with a few simple scripts, it is blazingly easy to collect it from a very large number of profiles.
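
To illustrate how little effort that takes, here is a rough sketch of the kind of script I mean. The endpoint and field names are invented (no such URL exists); the point is that once data is public, collecting it in bulk is a short loop, not a research project.

```python
# Hypothetical sketch of bulk collection of public profile data.
# The URL and JSON fields are invented; any real public endpoint
# would differ, but the loop would be just as short.
import json
import time
import urllib.request

collected = []
for user_id in range(1000, 2000):  # walk a range of profile IDs
    url = f"https://example.com/profiles/{user_id}/public.json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            profile = json.load(resp)
    except (OSError, ValueError):
        continue  # private, deleted, or unreachable: skip it
    collected.append({
        "name": profile.get("name"),
        "friends": profile.get("friends", []),
        "photos": profile.get("photo_urls", []),
    })
    time.sleep(0.1)  # even a polite crawl covers tens of thousands of profiles a day

print(f"Collected {len(collected)} public profiles")
```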


In 2010, Facebook ran an experiment with the University of California, San Diego in which they manipulated people’s voting habits. They took a sample of 61 million people, gave them a way to share with their friends that they had voted, and found that when people saw varying numbers of their friends vote, they were between 1% and 10% more likely to vote themselves. While studying how social media affects the political landscape is not necessarily a bad thing, experimenting on people without their consent, in a way that could change the political landscape, was not okay. The argument could be put forth that Facebook used a randomized trial and didn’t pick any particular political party or group to target in order to change the results of an election. While that sounds all fine and dandy, the largest share of high-frequency, invested users falls into the under-30 age group. Given that most Millennials tend to vote more liberal (which isn’t a bad thing) and that Facebook’s main user base is in that group, they did sway the political landscape.


In addition to the above, Facebook has been doing a few things you may have already noticed. Namely, it curates your content and chooses for you what you see. If you have more than a handful of friends, you have likely noticed that Facebook will only show you a few of them at a time. If you don’t interact with someone for a long time (liking their posts, sending personal messages, etc.), the site will stop showing you their status updates and content. The idea behind this is that Facebook wants to show you only the information you want to see in order to keep you addicted to the service and visiting more often. In other words, it wants to show you whatever it thinks YOU will find most pleasant so you keep coming back and racking up ad revenue.
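
As a rough sketch of the heuristic at work (my guess at its shape, not Facebook’s actual ranking code), the feed only needs to score each friend by how recently and how often you interacted with them, then drop everyone below a cutoff:

```python
# Hypothetical sketch of engagement-based feed curation.
# The weights and cutoff are invented; the effect is that friends
# you stop interacting with quietly vanish from your feed.
from datetime import datetime, timedelta

def friend_score(interactions: list[datetime], now: datetime) -> float:
    """More recent and more frequent interactions -> higher score."""
    return sum(1.0 / (1.0 + (now - when).days) for when in interactions)

now = datetime(2016, 6, 1)
friends = {
    "close_friend": [now - timedelta(days=d) for d in (1, 2, 5)],  # chats often
    "old_friend": [now - timedelta(days=400)],                     # one old like
}

# Only friends above the cutoff make it into the feed at all.
feed = [name for name, seen in friends.items() if friend_score(seen, now) > 0.1]
print(feed)  # ['close_friend'] -- old_friend's posts are never shown
```

The cutoff number is made up, but the behavior it produces, people silently disappearing from your feed, is exactly what the paragraph above describes.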


Another fun thing that Facebook has been doing of late is skimming your mobile device for information about you in order to further its interests. While I had seen the privacy permissions the apps request, I never really noticed this until Facebook started showing me friend recommendations for professional clients with whom I had zero shared friends, likes, companies, etc. What it was doing was looking through my phone and texting history and using that information to figure out who I was talking to. I can’t say I’m exactly thrilled about this. Facebook tells you it does this to make interacting with people easier and the service simpler. Sounds great until it isn’t. If that information gets into the wrong hands, be it a hacker from a foreign country or possibly from your own if you live in a sensitive area, you could be in trouble.
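
My guess at the mechanics, and it is only a guess, is ordinary contact matching: the app reads identifiers off your phone, and the server joins them against identifiers it already holds for existing accounts. The sketch below uses invented numbers and names; note that hashing the numbers first changes nothing, since the server can hash its own copies the same way.

```python
# Hypothetical sketch of contact matching. All data here is invented;
# the join itself is the whole trick: numbers from my device matched
# against numbers already attached to accounts.
import hashlib

def h(phone: str) -> str:
    return hashlib.sha256(phone.encode()).hexdigest()

# What the app could read off my phone (with the permissions it asks for).
my_contacts = {h(p) for p in ["+15551234567", "+15559876543"]}

# What the server already knows: hashed phone -> account.
accounts = {
    h("+15551234567"): "professional_client_a",
    h("+15550000000"): "total_stranger",
}

suggestions = [name for hp, name in accounts.items() if hp in my_contacts]
print(suggestions)  # ['professional_client_a'] -- zero mutual friends needed
```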

How DeepText Could Be Misused

(For legal reasons, I want to point out here that I am in no way telling you that Facebook WILL do these things, but they are possible and not that far off from what the company has done or is doing.)


Context-based content recognition. This is what we are talking about here: Facebook being able to look at your posts beyond the simple, searchable keywords you might use. It becomes unbelievably simple to figure out how people are feeling, what they are thinking, what they are planning, dreaming, scheming, inventing, wanting… at a human level. This technology gives you the ability to do two things: 1) research what people are thinking and feeling, and 2) use that information to manipulate them.


The first part of that – researching what people are thinking and feeling – is VERY easily misused. From the advertiser’s perspective, it becomes VERY easy to find people who are easy targets. Sell alcohol? Find people who are depressed or stressed out. Sell face creams and makeup? Find people who are insecure about their appearance. Weight-loss pills, alcohol, drugs, promises of friends and fun, health, religion… the list goes on. What Facebook’s new system can do is find people who are vulnerable for whatever reason and prey on them: sell them things they don’t need, or that might actually be dangerous for them. While Facebook could *try* to get around this by placing ad guidelines and restrictions, it doesn’t take a genius to figure out ways around those by selling sister products on the same site.
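
Here is a sketch of what that targeting query could look like. The user records and emotion labels are invented; the uncomfortable part is how short the query becomes once the platform has already inferred an emotional state for everyone.

```python
# Hypothetical sketch of targeting by inferred emotional state.
# The records and labels are invented; a real system would produce
# them at scale from posts like the ones described above.
users = [
    {"id": 1, "inferred": {"stressed", "insomnia"}},
    {"id": 2, "inferred": {"confident", "fitness"}},
    {"id": 3, "inferred": {"depressed", "lonely"}},
]

def audience(people: list[dict], any_of: set[str]) -> list[int]:
    """Select everyone whose inferred states overlap the target set."""
    return [p["id"] for p in people if p["inferred"] & any_of]

# An alcohol campaign aimed squarely at the vulnerable.
print(audience(users, {"stressed", "depressed"}))  # [1, 3]
```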


Such a system could also easily be used for more of the “research” that Facebook loves to do and that I discussed earlier in this article. They would very easily be able to change their site, curate content, or show certain friends in order to change how people think and feel. Should Facebook decide that they want to sway an election in a certain direction, they could simply show you posts that fit their plans at the time. Should Facebook want to change public opinion on a divisive topic, they could. Should Facebook want to hide things like their own shortcomings, they could.


What I am getting at is that Facebook, for good or bad, has manipulated people in the past to think and feel certain ways. They have manipulated people to act certain ways. They love to use psychological tricks to keep you invested and addicted to their site. What this technology can do is make them even more powerful, and I can’t say I support that. I’ve given them many, many years of my patronage, and I am done.


So, with this, ladies and gentlemen, I end my relationship with Facebook. No more posts, pictures, ads, or anything else. I am done.