SDL Episode 91

From Paul's Security Weekly

Secure Digital Life #91

Recorded on December 4, 2018 at G-Unit Studios in Rhode Island!

Hosts

  • Russell Beauchemin
    Cybersecurity & Network Security Program Advisor and Director of Instructional Support & Learning Innovation at Roger Williams University.
  • Doug White
    Cybersecurity professor, President of Secure Technology, and Security Weekly network host.
Announcements

    • If you are interested in quality over quantity and having meaningful conversations instead of just a badge scan, join us April 1-3 at Disney's Contemporary Resort for InfoSec World 2019, where you can connect and network with like-minded individuals in search of actionable information. Use the registration code OS19-SECWEEK for 15% off the Main Conference or World Pass.
    • Join us for our Webcast with Chronicle entitled "Intelligence Powered Malware Hunting". This webcast will be held December 5th @3-4pm EST. Go to securityweekly.com/chronicle to register now!
    • Go to https://go.stealthbits.com/2019trends to register for the STEALTHbits webcast "Emerging & Continuing Trends in 2019: Privacy Regulations, Active Directory Security & Machine Learning" for an in-depth discussion from Rod Simmons and Paul Asadoorian. You can also view their assessment at: https://www.stealthbits.com/assessment.

    Topic: Killer Robot Special



    Let's start out by talking about two things that everyone at PSW hates: the terms AI and ML. They don't hate the ideas themselves, just that everyone misuses the terms. This misuse has a deep history, and basically it comes down to marketing and media. The term "machine learning" was coined by Arthur Samuel back in the 1950s, around the same time John McCarthy coined "artificial intelligence". Both ideas had been around far longer: Turing was writing about computable machines in the 1930s and about machine intelligence by 1950, and even in the mechanical age, people experimented with devices that could "learn" based on analog concepts.

    Donald Michie worked on the idea of learning by "non-natural" means for games and built the MENACE engine around 1960: a pile of matchboxes and colored beads that learned to play noughts and crosses. "Shall we play a game?"
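
    The matchbox trick is simple enough to sketch. Below is a rough Python toy of a MENACE-style learner; the class name, bead counts, and reward values are our own illustrative choices, not Michie's exact setup.

        import random

        # MENACE-style learner: one "matchbox" of beads per board state,
        # one bead color per legal move. Illustrative sketch only.
        class Menace:
            def __init__(self, initial_beads=4):
                self.initial_beads = initial_beads
                self.boxes = {}      # state -> {move: bead count}
                self.history = []    # (state, move) pairs from this game

            def choose(self, state, legal_moves):
                # Open (or create) the matchbox for this state, draw a bead.
                box = self.boxes.setdefault(
                    state, {m: self.initial_beads for m in legal_moves})
                beads = [m for m, n in box.items() for _ in range(n)]
                if not beads:        # emptied by repeated losses; reseed it
                    box.update({m: self.initial_beads for m in legal_moves})
                    beads = list(box)
                move = random.choice(beads)
                self.history.append((state, move))
                return move

            def learn(self, result):
                # Reinforce: add beads after a win, remove one after a loss.
                delta = {"win": 3, "draw": 1, "loss": -1}[result]
                for state, move in self.history:
                    self.boxes[state][move] = max(0, self.boxes[state][move] + delta)
                self.history.clear()

        # Hypothetical usage inside a noughts-and-crosses game loop:
        agent = Menace()
        move = agent.choose("X..O.....", legal_moves=[1, 2, 4, 5, 6, 7, 8])
        agent.learn("win")

    Wire that into any game loop that feeds it states and legal moves, and the bead counts slowly bias it toward winning lines. But at the end of the day, there are a few key terms you should know.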

    1. Natural Intelligence (NI) -- This is the idea we apply to humans, who learn through what is essentially a "neural net" of rewards and punishments. Hunger translates to a tipping point in your brain: at some point you would eat garbage if you got hungry enough. This neural net is very complex, and the weights and balances get more intricate because morality, social norms, etc. are also involved, so you would probably eat garbage before you ate another human being. But then again...

    2. Machine Intelligence (MI) -- This is the idea that a non-natural thing could also learn, but the key is that it learns through some contrived notion. An example is a neural net programmed to decide when to change the thermostat based on past behavior, or, more complex, a "big data" system that uses traffic reports from millions of phones to predict and analyze traffic patterns, trying to find the best route based on both current and past conditions.

    3. Artificial Intelligence (AI) -- This is the idea that some new form of behavior might evolve from some starting point (like a human, but not emulating human behavior). Rewards and punishments for a machine are not the same as for a biological organism, so many different factors might come into play in such an evolution.

    4. False Intelligence (FI) -- or what I like to call "dumb intelligence". This is what most robots actually are. The old term for this was automaton: a device that did some mappable task over and over to reduce the amount of human work needed, for instance putting a rivet in a piece of metal that comes by on an assembly line. Any mappable task can be built into a machine, given technology that lets the machine manipulate the situation. A short sketch contrasting FI with MI follows this list.
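
    Since MI and FI are the two that get conflated most, here is the promised sketch: a "false intelligence" automaton that repeats one mapped task forever, next to a "machine intelligence" thermostat that changes its output based on past behavior. Everything here (names, numbers, the averaging rule) is invented for illustration.

        from statistics import mean

        # FI: a fixed automaton. Same mappable task every time, no memory.
        def rivet_automaton(sheet_metal):
            return sheet_metal + " [riveted]"

        # MI: a contrived learner. It remembers the temperatures you chose
        # at each hour and predicts the next setting from that history.
        class LearningThermostat:
            def __init__(self, default=20.0):
                self.default = default
                self.history = {}            # hour -> list of chosen temps

            def record(self, hour, chosen_temp):
                self.history.setdefault(hour, []).append(chosen_temp)

            def predict(self, hour):
                past = self.history.get(hour)
                return mean(past) if past else self.default

        thermostat = LearningThermostat()
        for temp in (18.0, 19.0, 18.5):
            thermostat.record(hour=6, chosen_temp=temp)
        print(thermostat.predict(6))      # 18.5, shaped by past behavior
        print(rivet_automaton("panel"))   # identical output, forever

    The automaton's output never changes; the thermostat's does, because history feeds back into it. That feedback loop is the entire difference between FI and MI.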



    So, FI would only kill by accident: say you stand between the automatic stabbing machine and the wall. MI might kill if it made a mistake in prediction (say, sending you down a one-way street because that street was reporting no traffic and the code didn't take direction into consideration). And natural intelligence kills all the time, for both understood reasons (I would really like your wallet, please) and non-understood ones (all people with red hair don't have souls). But when would AI kill? Well, that's what's scary. AI, if it existed, might evolve all sorts of reasoning we might not even understand. It's dangerous for us to try to intellectualize AI, since by definition we probably can't understand it. For hundreds of years we have been trying to isolate and understand NI, and we still can't seem to manage it (sometimes a cigar is just a cigar), so trying to extrapolate some artificial behavior for an AI system is probably pretty naive.

    But, of course, we can try. Let's say we build an AI that learns to drive a car. NOT MI, actual AI. Let's say a human called Ward decides to take his AI development, Wally, out for a spin, and Wally observes Ward driving and determines that you need a pipe, a necktie, the keys, etc. in order to drive. Later, Ward and June go out for martinis and steaks, and Wally decides to take the car out for a spin. Quickly, Wally learns that the pipe and necktie are stupid and tosses them out the window. Wally also decides that a more efficient route is on the sidewalk, since there are fewer vehicles and the car itself will be undamaged by the meatbags exploding as the sheet-metal bumper of this 1967 Chevrolet Biscayne strikes them. The windshield is quickly cleared of gore, and Wally arrives most efficiently at the malt shop, where he proceeds to chat up Penny Jamison after sticking his arm through Eddie Haskell's torso, that being the most efficient route to the straws. Etc. See Bender.

    I always like to think about ants in this scenario as well. When you walk, you don't stop and look around to make sure you won't step on an ant. It wouldn't be efficient, so you have eliminated that from your rewards-and-punishments system (pruning the old neural net). Now, how far does that go? Well, you probably wouldn't step on a kitten, even though avoiding it decreases efficiency. For some reason, a kitten has more weight than an ant. How about a snake? Maybe you are afraid of the snake, or the snake may bite you. All weights in the system.
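
    That "weights in the system" idea is really just a cost comparison, and a hedged toy makes it concrete. The numbers below are invented; the point is only that you detour when an obstacle's learned weight outvotes the efficiency cost of stopping.

        # Toy model of the ant/kitten/snake weighting. Every obstacle has
        # a learned weight; you only detour when that weight exceeds the
        # efficiency cost of stopping. All numbers are invented.
        DETOUR_COST = 1.0   # efficiency lost by stepping around something

        weights = {
            "ant": 0.01,     # pruned out of the reward/punishment net
            "kitten": 50.0,  # morality and social norms pile on weight
            "snake": 30.0,   # fear, and the chance of being bitten
        }

        def step_or_detour(obstacle):
            return "detour" if weights[obstacle] > DETOUR_COST else "step"

        for creature in ("ant", "kitten", "snake"):
            print(creature, "->", step_or_detour(creature))
        # ant -> step, kitten -> detour, snake -> detour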

    Now, once again, we are imposing our limited views on an AI, which may not even use a neural net, since it was designed by another AI, and that one by another AI. So, at what point does the AI sex robot, Pris, decide that (1) humans are pests, and just start killing, or (2) humans provide no efficiency, and just start killing? Or maybe it even decides that humans aren't worth killing and just ignores them (viz. Wintermute). Then you get back to the ants. Maybe in Wintermute's world, humans are like ants: you just ignore them. Sometimes they get stepped on, so what; and sometimes they sneak into your kitchen and make a mess, so you spray them.

    So, maybe MI is a real threat in the short run, and it's certainly the bigger threat to everyone's jobs. MI and FI are where all the manufacturing jobs go, so what is left for humans? Well, you could learn to code MI and FI equipment. That might work until the AI stuff decides that coding can be done better by AI and pats you on the head. This could be the first wave of true societal change, in the sense that all of us get replaced by MI and FI and we get to live in the Star Trek universe, just doing what we like the most. (Honestly, I never exactly figured out the Star Trek economy. I mean, are they communists? Anarchists? They still like to gamble for strips of gold-pressed latinum and such.)

    So, in the end, where does this go? Where do the humans fit in this model? Is it Blade Runner, and we all just wander around the ruins eating noodles while the AIs, MIs, and FIs do all the work? Is it The Midas Plague, where we all beg NOT to have to consume? Or how about Venus, Inc., where the MIs figure out exactly how to manipulate us and Wintermute takes over running the whole show?

    Maybe we combine meatbots, MI, and AI all together into some sort of chimera?

    That brings us to the singularity. This is the idea that at some point, Wintermute will exist and AI will really occur. When that happens, we don't really know what the result is, because in the analogy we are the ants. AI could bring some sort of benevolent Star Trek kind of world, but it could also be Terminator: Skynet became self-aware and decided that, like the Daleks, extermination was the only way. Part of me is scared of that, but there is also the transhumanist idea that maybe AI and humans combine somehow to create something better. I don't know; maybe it will just use us for fuel, a la The Matrix.

    References in today's show:

    • Neuromancer. 1984. William Gibson.
    • Blade Runner. 1982. Movie.
    • The Matrix. 1999. Movie.
    • WarGames. 1983. Movie.
    • Leave It to Beaver. 1957-1963. Television.
    • The Midas Plague. 1954. Frederik Pohl. Galaxy Magazine.
    • Venus, Inc. (The Space Merchants). 1953. Frederik Pohl and C.M. Kornbluth.
    • MENACE (Machine Educable Noughts And Crosses Engine). 1961. Donald Michie.
    • Some Studies in Machine Learning Using the Game of Checkers. 1959. Arthur Samuel. IBM Journal of Research and Development.
    • Computing Machinery and Intelligence. 1950. Alan Turing. Mind.
    • On Computable Numbers, with an Application to the Entscheidungsproblem. 1937. Alan Turing. Proceedings of the London Mathematical Society.
    • Doctor Who. Television.
    • Star Trek. Television.
    • If I missed any, email me and I will fill them in.