SDL Episode31

From Paul's Security Weekly

Secure Digital Life - Episode 31

Recorded September 19, 2017 in Rhode Island!

Episode Audio

Coming Soon!

Hosts

  • Doug White
    Cybersecurity professor, President of Secure Technology, and Security Weekly network host.
  • Russell Beauchemin
    Cybersecurity & Network Security Program Advisor and Director of Instructional Support & Learning Innovation at Roger Williams University.

Killer Robots, Oh My!

    Elon Musk and 116 experts sent a letter to the UN asking that autonomous weapons (drones, tanks, and automated machine guns) be banned.

    • “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
    • “We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
    • Asimov's Three Laws of Robotics (Handbook of Robotics, 56th Edition, 2058 A.D.)

    - A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    - A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    • So, what is Artificial Intelligence?

    - The Turing Test -- Can a computer program "fool" a panel of judges, leaving them unable to determine which participant is the human?

    - Is a chess program like Deep Blue AI?

    - Why would you want to emulate human behavior anyway?

    - AI is really about problem solving: using knowledge and the ability to develop new solutions to problems, or even conceiving entirely new approaches.

    - Example: Automobiles. Humans drive automobiles in ways designed not to optimize driving but to optimize travel around human failings (reaction times, distraction, etc.). An AI drives to optimize safety with minimal time. So... AI-driven cars can run very close together, may use all lanes in all directions, etc.

    - AI could easily surpass the singularity (the hypothetical point at which machine intelligence exceeds human intelligence and begins improving itself).

    - The original goal was to emulate humans, but this is no longer the goal nor should it be. Humans suck.

    - Could AI design new AI? Of course. This could push past the singularity and produce systems incomprehensible to humans (well, unmodified humans, anyway).

    • How does Skynet get activated? Well, say you have an AI and it's told to extinguish all threats. Ouch. I mean, humans suck. They are messy and troublesome. Clean it up.
    • Should you be scared of terminators? I think so.
    • Morality? Well, if you figure that one out, let us know.
    • So, what about Logan's Run? The City decided that the best way to deal with humans was to let them live for 30 years (really 31, since the City was written in C and started counting at 0, like it's supposed to).
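
    The zero-based-counting joke above can be sketched in a few lines of C. This is purely illustrative (the `years_lived` function is hypothetical, not from the show): counting ages from 0, a lifespan that ends at age 30 actually spans 31 distinct years.

    ```c
    #include <stdio.h>

    /* Hypothetical illustration: counting the C way, from 0,
     * a "30-year" lifespan covers 31 distinct ages (0 through 30). */
    static int years_lived(int last_age) {
        int years = 0;
        for (int age = 0; age <= last_age; age++)  /* ages 0..last_age inclusive */
            years++;
        return years;
    }

    int main(void) {
        printf("%d\n", years_lived(30));  /* prints 31: the City's off-by-one */
        return 0;
    }
    ```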


    https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war

    • Asimov, Isaac. "Runaround" (short story, 1942); I, Robot; et al.
    • Also referenced: The Terminator (1984)
    • You should read:

    - 1984 -- George Orwell

    - Minority Report -- Philip K. Dick

    - Logan's Run -- Nolan and Johnson

    • You should watch:

    - The Terminator (1984)

    - RoboCop (1987)

    - Logan's Run (1976)