
Who’s Guarding the Sheep?

While we’re still in the zone of Internet Conspiracies from the last few articles, why not continue to enjoy what’s becoming an increasingly competitive sport?

Checking email today, I found one message stamped with a big, red-orange *WARNING*. Not that the content I was about to view was “Adult,” or even that it contained malware, but “This message seems dangerous.”

Why? Because the content in it might be deemed, for one reason or another, incorrect. I came to that conclusion because I routinely get phishing emails, as do most of us. As a rule, Gmail puts these messages into the Spam folder (you may want to check that folder once in a while and see what’s in it; more often than I’d have expected, I have found perfectly legitimate emails dropped there, usually because the message, or more often the Subject line, contains a word or words associated with spam). And some of the messages I have gotten contain a link. When clicked, my malware protection software kicks into gear and warns me that the site is dangerous, may contain malware, and should be avoided. I can choose to continue to the site, but I’ve been cautioned.

But the “Message seems dangerous” warning was a first for me.

It is a fact that the Internet and Social Media are becoming a more hazardous place to travel, at least from the standpoint of questionable information. And as we’ve seen, the wisdom of not putting anything up for view that may, one day, come back and bite you, hard, has been proven over and over again as “celebrities’” opinions from 10 years ago are resurrected to destroy them in the present day.

And I am grateful to the makers of malware and virus protection software, which no user of the Internet should ever be without, for their products. But there is a line between protecting me, and monitoring me; between keeping me away from a harmful piece of software and trying to use social engineering to keep me away from information. And more and more, it seems that line is blurring.

We have all used “fact checking” websites now for a long time. Someone passes you a piece of information (especially a “meme”), and something about it seems implausible. In the very old days, you’d run it through Snopes, one of the first, if not the first, sites to dig down to discover whether something was True, False, or Mixed. More and more such sites cropped up – and soon we were fact checking the fact checkers: who was behind the site, and did they have an axe to grind or an agenda to sell? Soon we were refuting your fact check with our fact check.

Now we are in the era of “fake news” and “alternate facts” (which isn’t quite as counter-intuitive as it may sound: measure temperature on an airport runway on a hot June day, and then measure it in the middle of the woods that same day, only 5 miles away, and your “facts” will definitely alternate). We soon learned to scoff at a person’s online citation by saying “oh, they’re owned by a right/left guy, you can’t trust them!”

The logical go-to was artificial intelligence algorithms: if you see a word, or set of words, or too many repetitions of a word, you might flag the news or facts as being suspect. Of course, we soon learned that algorithms are nothing but programs, written by humans, that catch what they’re designed to catch.
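To see just how simple, and how blunt, that kind of flagging logic can be, here is a minimal sketch in Python. The trigger phrases and the repetition threshold are invented for this example; no real product is being quoted.

```python
from collections import Counter
import re

# Hypothetical trigger phrases and a repetition threshold, made up for this sketch.
SUSPECT_PHRASES = {"shocking", "miracle cure", "they don't want you to know"}
MAX_REPEATS = 5  # flag if any single word appears more than this many times

def looks_suspect(text: str) -> bool:
    """Crude flag: a trigger phrase is present, or one word is repeated too often."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in SUSPECT_PHRASES):
        return True
    words = re.findall(r"[a-z']+", lowered)
    counts = Counter(words)
    return any(count > MAX_REPEATS for count in counts.values())

print(looks_suspect("SHOCKING: the miracle cure they don't want you to know about!"))  # True
print(looks_suspect("The city council met Tuesday to discuss the budget."))            # False
```

A filter like this catches exactly the patterns its author listed, and nothing else, which is the whole point: the judgment lives in whoever wrote the list.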

Enter NewsGuard. Their “secret sauce” was – ready? Human intelligence. In March of 2018, journalists and media entrepreneurs Steven Brill and Gordon Crovitz announced that they would “address the fake news crisis by hiring dozens of trained journalists as analysts to review the 7,500 news and information websites most accessed and shared in the United States.” Each site would be given a reliability rating, Green, Yellow, or Red, through a process that would be documented. To learn more, and to install the extension in your browser, go to newsguardtech.com.

Back to Big Data: in China, there is something called a “social credit score.” Being implemented now, and intended to be fully active by 2020, the social credit score is a system for standardizing the assessment of citizens’ and businesses’ economic and social reputation. The system is based on mass surveillance, using “big data analysis technology,” including facial recognition software, beacons on computers, photography on the roadways, and more. Because so much of what we do now is done online, we leave traces of our activity, like DNA and fingerprints, everywhere we go. You looked at an “iffy” book on Amazon – deduct 5 points. You supported your local ASPCA – gain 3 points.
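To make the arithmetic concrete, here is a minimal sketch of how such a point-based tally might work. The actions, point values, and baseline are entirely hypothetical, invented for this illustration rather than drawn from any published description of the Chinese system.

```python
# Entirely hypothetical actions and point values, made up for this illustration.
POINT_RULES = {
    "viewed_iffy_book": -5,
    "supported_local_aspca": +3,
    "paid_utility_bill_late": -2,
}

def social_credit_score(baseline: int, logged_actions: list[str]) -> int:
    """Start from a baseline score and apply the point value of each logged action."""
    return baseline + sum(POINT_RULES.get(action, 0) for action in logged_actions)

# A citizen starting from a baseline of 100 points:
print(social_credit_score(100, ["viewed_iffy_book", "supported_local_aspca"]))  # 98
```

The mechanics are trivial; the power of such a system lies entirely in who decides what goes into the rulebook and what the surveillance apparatus is able to log.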

Of course, China has more of a history of tracking and, by extension, “managing” people’s behavior, dating back to Mao, though it has roots in Western practice, particularly religious practice, as well. Even in our day-to-day economics we operate on a sort of social credit system: if you have a credit card and pay your minimum balance regularly, you get a good credit score (though oddly, if you have NO credit card, you lose some credit capital). If you have a mortgage, you can deduct the interest payments from your income taxes. Charitable giving is another deduction. All of which is to say, we are gently urged to do, or not do, certain things because they benefit us financially in ways that some other entity (the government, for example) has deemed beneficial.

China’s system intends to herd citizens’ behavior with a score for integrity and credibility within society. Improper behavior would result in limitations of freedoms and pleasures. “In March, 2018, Reuters reported that restrictions on citizens and businesses with low Social Credit ratings, and thus low trustworthiness, would come into effect May 1. By May, 2018, several million flight and high-speed train trips had been denied to people who had been blacklisted.” (Wikipedia) Once fully implemented, in 2020, the system “will manage the rewards, or punishments, of citizens on the basis of their economic and personal behavior. Some types of punishments might include: flight ban, exclusion from private schools, slow internet connection, exclusion from high prestige work, exclusion from hotels, and registration on a public blacklist.”

Somehow, it all seems to come back to what George Orwell foresaw in 1948 in his brilliant “1984.” When it comes to the Internet and digital media, somehow, somewhere, someone will be watching. For better or worse.

Nancy Roberts