SDL Episode103

From Paul's Security Weekly

Recorded on March 19, 2019 at G-Unit Studios in Rhode Island!

Hosts

  • Russell Beauchemin
    Cybersecurity & Network Security Program Advisor and Director of Instructional Support & Learning Innovation at Roger Williams University.
  • Doug White
    Cybersecurity professor, President of Secure Technology, and Security Weekly network host.
Announcements

    • RSA Conference 2019 is coming up March 4 – 8 in San Francisco! Go to rsaconference.com/securityweekly-us19 to register now using the discount code 5U9SWFD to receive $100 off a full conference pass! If you are interested in booking an interview or briefing with Security Weekly, please go to securityweekly.com/conferencerequest to submit your request!
    • Join us April 1-3, at Disney's Contemporary Resort for InfoSec World 2019 where you can connect and network with like-minded individuals in search of actionable information. Visit https://infosecworld.misti.com/ and use the registration code OS19-SECWEEK for 15% off the Main Conference or World Pass. If you are interested in booking an interview or briefing with Security Weekly, please go to securityweekly.com/conferencerequest to submit your request!
    • Registration is now open for the first Security Weekly webcast of 2019! You can register for our "Rise Above Complex Workflows: Practical Ways To Accelerate Incident Response" webcast now by going to securityweekly.com/webcasts.



    Topic: Machine Learning (Book: Minority Report)

    - Alright, so let's talk about Minority Report and ML/AI first. In 1956, Philip K. Dick (PKD) wrote about the idea of stopping crimes before they happen. Now, he was using "precognition": mutant humans who could see potential futures and as such could predict that someone was going to do something bad, so that person could be arrested before they ever got around to the crime.

    - Think about it: what if you could have predicted the Holocaust and the people involved? I mean, you would stop that, right? Arrest the people who are "going" to do it and prevent it. It's a pretty easy sales pitch. Police have essentially been doing that sort of thing all along, in the sense that if they see a car weaving on the road, maybe the driver is drunk. That's called probable cause in the United States. What if we could use machine learning to predict who will drive drunk? Machine learning is about just that: using techniques to crunch numbers and essentially profile people or predict behavior based on data. Techniques like linear or logistic regression, discriminant analysis, LISREL, etc. are all things we use all the time to try to create predictions. Machine learning can do it faster, and maybe in ways we didn't expect, by looking at massive databases of behavior. As society moves toward a data model, where we can really deep dive into behavior using credit card records, court records, and so on, all of that starts to create the massive big-data-style collections that ML loves to chew on to find patterns. How many things have you bought lately using only cash or bitcoin? Think about that. If you pull down my credit card statements for this last month, you would see where I went: eating and drinking in Moscow, driving on the Van Wyck, parking at JFK, you name it, it's all there. What if those added up to something in a model? ML might be able to predict what I would do this week if all that were available (hint: it's boring).
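    The kind of behavioral prediction described above can be sketched with a tiny logistic regression trained by gradient descent. The two features and all the numbers below are invented purely for illustration; a real model would use thousands of features and far more data.

```python
import math

# Toy logistic regression: predict a binary behavior from two
# hypothetical features. All data here is made up for illustration.
data = [
    # (late-night charges per month, prior incidents), label
    ((0.0, 0.0), 0),
    ((1.0, 0.0), 0),
    ((2.0, 1.0), 1),
    ((3.0, 1.0), 1),
]

w = [0.0, 0.0]  # feature weights
b = 0.0         # bias term
lr = 0.5        # learning rate

def predict(x):
    """Probability of the behavior, via the logistic function."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Stochastic gradient descent on the log-loss.
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print([round(predict(x), 2) for x, _ in data])
```

    After training, the model assigns low probabilities to the first two (negative) examples and high probabilities to the last two.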

    • So, the idea that police would use this kind of technique starts to become intriguing. The NYPD is using something called Patternizr to search crime records and compare cases. It's not a big leap from ordinary police work, in the sense that if you were able to cross-reference details like "wore a Nixon mask" to tie crimes together, you might be able to figure something out. If a string of bank robberies occurs using a Nixon mask and the banks are all in Bed-Stuy or somewhere, you might surveil some other banks in the area. Again, not a real big leap. But what about starting to use history, behavior, and patterns to predict who will actually commit the next crime?
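    The core of that kind of case-matching can be sketched as a similarity search: encode each report as a feature vector and rank past cases by cosine similarity. The feature names and cases below are invented examples, not anything from the actual Patternizr system.

```python
import math

# Hypothetical binary features extracted from crime reports.
FEATURES = ["nixon_mask", "bank", "bed_stuy", "weapon_shown", "daytime"]

cases = {
    "case_A": [1, 1, 1, 1, 0],
    "case_B": [1, 1, 1, 0, 1],
    "case_C": [0, 0, 1, 1, 1],
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# New report: Nixon mask, bank, weapon shown.
new_report = [1, 1, 0, 1, 0]
ranked = sorted(cases, key=lambda c: cosine(new_report, cases[c]),
                reverse=True)
print(ranked)
```

    Here case_A, which shares the most features with the new report, ranks first, so an analyst would review it as a possible linked crime.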



    • Currently, all this data is NOT tied together, but we are heading in that direction. Linking a person to their health records, their criminal history, their credit score, and their credit card behavior is not decades away; it is almost possible today. Certainly, privacy advocates scream about this, but when horrific events occur, it becomes easy to get people to trade privacy for safety pretty fast. Think about the Patriot Act, passed after 9/11: it really assaulted privacy for the sake of security. Massive gun attacks like the one in New Zealand lead to moves in this direction as well; there have been continual calls for exchanging data across borders as a result of that horrible attack.



    So back to ML. ML is great at chewing on data and finding patterns. Sometimes simple patterns which were previously unseen are found just by running all possible algorithms on a massive data set and finding a significant result. Explain "significant" briefly.
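    One concrete way to unpack "significant": ask how often a pattern at least this strong would show up if there were really no relationship at all. A permutation test estimates that directly; the synthetic numbers below are invented for illustration.

```python
import random

random.seed(0)

# Synthetic data: a feature that differs sharply between two groups.
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
feature = [1.0, 1.2, 0.9, 1.1, 1.0, 2.0, 2.2, 1.9, 2.1, 2.0]

def mean_gap(xs, ys):
    """Absolute difference between group means."""
    g1 = [x for x, y in zip(xs, ys) if y == 1]
    g0 = [x for x, y in zip(xs, ys) if y == 0]
    return abs(sum(g1) / len(g1) - sum(g0) / len(g0))

observed = mean_gap(feature, labels)

# Shuffle the labels many times and count how often chance alone
# produces a gap as large as the observed one. That fraction is the
# p-value: small p-value = "significant".
trials = 5000
hits = 0
for _ in range(trials):
    shuffled = random.sample(labels, len(labels))
    if mean_gap(feature, shuffled) >= observed:
        hits += 1

p_value = hits / trials
print(observed, p_value)
```

    A caveat worth making on air: if you test thousands of candidate patterns against a massive data set, some will look "significant" by chance alone, which is exactly the multiple-comparisons trap in brute-force ML pattern hunting.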

    • I could really see ML being used heavily to predict behavior. That's where PKD starts to come into the picture. What if you could stop a school shooting? We have about one a week now in the States. What if three things predicted it: 1) the amount of Jell-O eaten; 2) the length of your hair; and 3) your shoe size. It's silly, but what if? What should we do? So, if we have cameras everywhere (even in the grocery store robot) and we could scan a crowd, identify the people, and then cross-reference the three things, should we detain that person? Maybe. I can really see how PKD got to this idea.



    • What about social media? ML was being used to predict how people might vote, and which messages would sway them, in the last election. Again, this is nothing new; people tried to do this for years with polling and so forth, but ML really brings it into focus. Suddenly, we saw people being manipulated by messaging very rapidly and with great effect. Scary.



    • What about good citizen programs? China is doing it. The Black Mirror episode "Nosedive" is a great example of this. Make a post against the government, lose 10 points. Praise the Dear Leader, get plus 50. Attain VIP status, gold status, platinum status. We are certainly wired that way. ML, again, can dig through those patterns and decide what to reward and what to penalize. ML can then start to predict who is a good citizen, who will vote for the Dear Leader, who will be a radical or a revolutionary. Should we stamp that out? I know, a resounding no from the crowd. But again, what if you could save lives and prevent crime? What if you could save a child from being molested by having ML figure out who was going to be molested and who was going to be the molester? I can see it. That's what PKD was trying to say (he was a genius, btw): it's not that simple, not just privacy or no privacy.
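    The point system described above is mechanically trivial, which is part of what makes it unsettling. A minimal sketch, with action names, point values, and tier cutoffs all invented for illustration:

```python
# Hypothetical point values per logged action.
POINTS = {
    "praise_leader": +50,
    "criticize_government": -10,
    "volunteer": +20,
    "jaywalk": -5,
}

# Hypothetical tier cutoffs, checked highest first.
TIERS = [(200, "platinum"), (100, "gold"), (50, "vip"), (0, "citizen")]

def score(actions, start=100):
    """Sum point adjustments over a citizen's action log."""
    total = start
    for action in actions:
        total += POINTS.get(action, 0)
    return total

def tier(total):
    """Map a score to its status tier."""
    for cutoff, name in TIERS:
        if total >= cutoff:
            return name
    return "restricted"

log = ["praise_leader", "volunteer", "jaywalk", "criticize_government"]
s = score(log)
print(s, tier(s))
```

    Everything interesting (and dangerous) happens upstream of this arithmetic: which actions get logged, who sets the point values, and what "restricted" status actually costs a person.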



    • ML gets more powerful all the time as horsepower increases in the computing world. When we can link this to databases that contain vast amounts of information about behavior, we suddenly have the capability to really push into this realm. I don't know if it's good, bad, or indifferent, but it seems to be coming.



    • Now AI, AI is something else. It means something we don't understand, a true evolutionary intelligence that will likely supersede human capability. AI may decide we need to be removed, a la Skynet, or start predicting not one move ahead but hundreds. So my modern PKD story idea is this: what if AI could predict not just your behavior but the behavior of your grandchildren? Should we maybe neuter you today so as to make for a better tomorrow?



    • Oh well, maybe we should all convert our money into gold bars and shotgun shells and go live in a shack in Red Cloud. No wifi, no phone, electricity by pedaling a bike, don't eat the Soylent Green, that kind of thing. Well, that doesn't sound so good to me. Hey Siri, get me a beer.