From Paul's Security Weekly
Recorded on April 2, 2019 at G-Unit Studios in Rhode Island!
- Register for our upcoming webcasts with Recorded Future by going to securityweekly.com/webcasts. If you have missed any of our previously recorded webcasts, you can find them at securityweekly.com/ondemand.
- We just released our 2019 Security Weekly 25 Index Survey. Please go to securityweekly.com and click the Survey link to help us understand who’s evaluating, using, or formerly used any of the Security Weekly 25 companies. The results will be summarized and presented back to all responders in a private webcast.
Topic: HateStream: Social Media in the Wake of the Christchurch Attacks
- A long time ago, well, not so long, the news had this guy named Walter Cronkite and everyone trusted him. If he said "This is bad," it was probably bad. Today, if there is an event like, say, the shooting in New Zealand, there is still that news component, but now there are no controls. Seriously. In 1968, the news decided how much of something you got to see and maybe what you shouldn't/didn't see. Today, that sort of control is slipping away. Think about it. In the recent New Zealand shooting, the suspect live-streamed the massacre for about 17 minutes. Facebook didn't do anything for a while because they have a "suicide button" (which I have been afraid to press) but not a "murder" or "genocide" button. So, it seems we have arrived at the point where we are streaming murders, suicides, and other horrific acts. How long will it be before they need a "torture" button? Probably already do.
- We are also increasingly susceptible to what some people call "fake news" or "trolling". Let's talk about trolling a little bit. I used to troll people a lot on Sierra Online and CompuServe because it seemed amusing. My friend and I would start ridiculous arguments (he would say "Abraham Lincoln had this thing about nose hair" and I would accuse him of being anti-patriotic), and we would just basically attack each other until other people joined in. It was silly, but it is awfully reflective of today. Today, we start the Abraham Lincoln nose hair fetish rumor and it gets retweeted; two hours later, it's all over the world. The same goes for all sorts of recruiting and misinformation: racist stuff, anti-this and anti-that, misogyny, in fact all the -ologies, you name it. What is the responsibility of social media?
- Gizmodo had a story this week where Facebook changed the title of a Vice President of Product Management to Vice President of Integrity, then went through and changed all references to that person's old title to match. This apparently had something to do with the New Zealand shootings and Facebook not having a way to report massacre content.
- There is also the matter of the 70-page manifesto this white supremacist murderer wrote. Should Facebook take it down? Should they really? I don't know. Let's talk a little bit about the practical and ethical pieces of that.
- So practically, in 2016 we averaged 500 million tweets per day. That's about 350,000 tweets per minute. Who exactly is monitoring this, and how many people would it take? What kind of training should we provide those people? Is it ok to say the N word? How about the C word, or the B word? The D word? Ok, so let's just say that Carlin's seven words you can't say on television would suffice. Our ML can evaluate it and boom, you're blocked. What about pictures? Nudity, full frontal, full backal, above the neck? Well, ML is pretty good at that too. But what are the rules? I bet we couldn't agree on that in a million years.
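The back-of-the-envelope math and the "seven words" idea above can be sketched as a toy. This is purely illustrative: the word list is a placeholder, and the whole point of the discussion is that a static blocklist like this is nowhere near sufficient; real platforms use ML classifiers, not anything this naive.

```python
# Toy sketch only -- placeholder blocklist, not any real platform's filter.

TWEETS_PER_DAY = 500_000_000                     # 2016 average cited above
tweets_per_minute = TWEETS_PER_DAY // (24 * 60)  # roughly 350,000/minute

# A naive blocklist filter in the spirit of "Carlin's seven words":
BLOCKLIST = {"badword1", "badword2"}  # hypothetical placeholder terms

def is_blocked(text: str) -> bool:
    """Block the post if any blocklisted word appears in it."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)
```

Even this toy shows why the rules are the hard part: it misses misspellings, context, sarcasm, and images entirely, while the volume numbers show why purely human review doesn't scale.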
- In the ethics dimension, how about things that ML is not good at doing? Political manifestos? Russia just passed a law that allows you to be imprisoned for criticizing the government. Yikes. Add that to the ML. What about recruiting people to join my new cult? We are going to deify Russ and make him our golden god who will guide us to the promised land in giant trapezoidal ships. Is that ok? Flat Earth, anyone? Anti-vax? Where does it stop? Maybe we build an AI, called Walter Cronkite, that will just tell us what we should know and leave the rest out. Is Facebook responsible for that? Should they be? I am scared enough of Zuck telling me what to think as it is. Really scary, kids.
- If social media is going to be given that amount of power, I bet they start abusing it pretty fast as well. Maybe it just turns into Caveat Emptor all the way. We have to learn to police ourselves and be smarter. Don't watch murder videos. Don't watch rape videos. If no one looks at them they go away pretty fast. But what should we do? Talk about Freedom vs Security for a while.
- So, how far should this go? What's allowed? Will our AI overlords decide? Should we just use ML to filter out the M word (I don't know what that is, but someone probably doesn't like it when I say "moist"), or let it be a free-for-all with the marketplace deciding what is too much? How many warning buttons should be on Facebook? When should you press them? How much murder is too much murder? Starting to sound like a Vonnegut novel at this point (only not as well written). In that same vein: Welcome to the Monkey House. "Harrison Bergeron" was a story where everyone was required by law to be equal, and that equality was enforced pretty harshly, so I don't know.
- Black Mirror -- White Bear
- The Truman Show
- Harrison Bergeron -- Kurt Vonnegut, Jr.