SDL Episode 98

From Paul's Security Weekly

Recorded on February 5, 2019 at G-Unit Studios in Rhode Island!

Hosts

  • Russell Beauchemin
    Cybersecurity & Network Security Program Advisor and Director of Instructional Support & Learning Innovation at Roger Williams University.
  • Doug White
    Cybersecurity professor, President of Secure Technology, and Security Weekly network host.
Announcements

    • RSA Conference 2019 is coming up March 4 – 8 in San Francisco! Go to rsaconference.com/securityweekly-us19 to register now using the discount code 5U9SWFD to receive $100 off a full conference pass! If you are interested in booking an interview or briefing with Security Weekly, please go to securityweekly.com/conferencerequest to submit your request!
    • Join us April 1-3, at Disney's Contemporary Resort for InfoSec World 2019 where you can connect and network with like-minded individuals in search of actionable information. Visit https://infosecworld.misti.com/ and use the registration code OS19-SECWEEK for 15% off the Main Conference or World Pass. If you are interested in booking an interview or briefing with Security Weekly, please go to securityweekly.com/conferencerequest to submit your request!
    • Check out our On-Demand material! Some of our previously recorded webcasts are now available On-Demand at: securityweekly.com/ondemand.

    Topic:



    • Alright, this is a really important idea. It's another one of those things that came from a mainframe concept: a shell or an instance that was used to control how things worked and what someone could do.



    • But first, let's talk about virtual machines a little bit. The idea of a virtual machine is definitely mainframey. I mean seriously. A VM was the idea that we take a piece of hardware, create a "shell environment," and then inside that shell we emulate other hardware. There was a time when drivers, hardware, you name it, were a total nightmare of floppy disks, installs, blue screens of death, screaming, hair-on-fire types of experiences. In fact, when you wanted to run something like *nix on a piece of hardware, well, good luck. The advent of more demanding operating systems also meant hardware issues that created a great deal of difficulty finding drivers, etc. So, what to do? Well, what if you could standardize all the hardware, VIRTUALLY?

      OK, let's say you have an Intel motherboard and a SoundBlaster sound card. Each of those things has a set of drivers created by the manufacturer (proprietary) for any given version of an operating system. Great, but what if there is no SoundBlaster driver for Gentoo Linux? Well, the answer is, you can get a different sound card or write your own driver. But what if we could write an "abstraction layer" that sits between the real hardware and some "virtual hardware"? Let's call them puremobo and puresound. The abstraction layer translates between the pure hardware and the real hardware, so it can become a standard driver for mostly standard hardware. Then I can create a shell that contains Gentoo Linux with a bunch of very standard pure drivers, each backed by an abstraction layer that reaches the real drivers on the bare metal. Gentoo is then installed in the shell and doesn't know that it is running on virtual hardware. That means it works. If you log into it remotely, you probably don't know it's not real either. So, this idea became the basis of VMs.

      The other thing about VMs is that they allow you to consolidate and share hardware (more mainframe stuff). The mainframe was an expensive piece of equipment that no one person could really afford (except maybe a millionaire, and yes, back then a millionaire was a big deal). By slicing up the hardware virtually, I could allocate a processor or a piece of a processor to you (along with memory and so forth) and you could share all that with a large group of people. Again, that is what a VM does: take a big monster machine (or even a little one) and suddenly you can share things. This idea became very popular even on the desktop, since people wanted to be able to run Windows, Linux, who knows what, all on the same machine and maybe even at the same time. If you have enough hardware capacity, the VMs don't care and can share out a lot.
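    • To make the abstraction-layer idea concrete, here is a tiny, purely illustrative Python sketch. The names (PureSoundDevice, SoundBlasterDriver, write_pcm) are made up for this example and are not a real hypervisor API; the point is just that the guest only ever talks to the standard virtual device, and the layer translates those calls for whatever real driver happens to be underneath.

      # Illustrative sketch of the "abstraction layer" idea (hypothetical names).

      class SoundBlasterDriver:
          """Stand-in for the real, vendor-specific driver on the bare metal."""
          def write_pcm(self, samples):
              print(f"[real hardware] playing {len(samples)} samples")

      class PureSoundDevice:
          """The standard virtual sound card every guest sees, no matter the host."""
          def __init__(self, real_driver):
              self.real_driver = real_driver      # the layer's link to the real hardware
          def play(self, samples):
              # The guest-facing call is always the same; translation happens here.
              self.real_driver.write_pcm(samples)

      # The hypervisor wires the virtual device to whatever is actually installed:
      guest_sound = PureSoundDevice(SoundBlasterDriver())
      guest_sound.play([0, 1, 0, -1] * 100)       # the guest never knows what's underneath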



    • Some of the big makers are VMware, Hyper-V, and VirtualBox (free; originally from Sun, now Oracle). Suddenly, you can use Linux for the things Linux is good at doing, and so forth. All of these are very good at sharing resources.



    • So, let's take it a step further and talk about a video game called Invisible Sun: The Coming Storm. Now, if you want to write this software and sell it, one of your biggest challenges will be stability across platforms, so that when one person running Windows 7 and another person running Windows 10 want to play, the game runs stably for both of them. Tech support for games is challenging, and it creates a lot of problems if the game crashes or won't run for a lot of people.



    • So, one solution to this would be to tell all your customers: look, install VirtualBox, then install Ubuntu Linux, then install the game in Ubuntu, and it will run. But that leads to a lot of technical problems in and of itself. What if I could go ahead and set all that up for you, with all the dependencies, and secure it so no one could mess with it? That's a container.



    • So, back to Invisible Sun. Let's say that IS requires C++, the GNU C libraries, another application called boingo, and a video library called GLUI to be installed on the host system. Traditionally, when you install the game, all this stuff has to be installed on the host. Lots of problems emerge: boingo causes SoundBlaster drivers to crash, GLUI conflicts with .NET, I don't know, crazy stuff. If you have ever installed a really big game, you know the feeling. In the container world, we could bundle all these things together into a container and get them stable. When you want to run the game, the container is just this object that gets instantiated and runs. Everything it needs is inside the container, and an abstraction layer interacts with the operating system to manage the hardware, just like the VM.
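    • As a rough sketch of what that bundling looks like in practice (assuming the Docker SDK for Python, installed with "pip install docker"; the directory, image name, and tag here are hypothetical), building and instantiating the game's container might look like this:

      import docker  # Docker SDK for Python: pip install docker

      client = docker.from_env()

      # Build an image from a (hypothetical) directory holding the game plus a
      # Dockerfile that lists every dependency (C++ runtime, GNU C libraries,
      # boingo, GLUI, ...), so none of it ever touches the player's host install.
      image, build_log = client.images.build(path="./invisible-sun", tag="invisible-sun:1.0")

      # Instantiating the container: everything the game needs is already inside
      # the image, so it just runs.
      container = client.containers.run("invisible-sun:1.0", detach=True)
      print(container.short_id, container.status)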



    • What does all that mean? Well, 1) it means that the container is a standalone object that has no dependencies on you installing things on your local system. It just runs. Inputs and outputs go into and come out of this black box, but what goes on inside it is stable. 2) It means that the container, effectively, cannot impact the local system other than in terms of resources. 3) It means that if you have a validated container, it is not going to have malware, etc. (unless of course that was in there already). It also means that the container may not be able to interact with anything else except in a very secure way.
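    • A hedged sketch of point 2, again using the Docker SDK for Python and the hypothetical invisible-sun image: the host decides exactly how much the container is allowed to touch when it launches it.

      import docker

      client = docker.from_env()

      # Launch the (hypothetical) game container with hard caps on what it can use.
      container = client.containers.run(
          "invisible-sun:1.0",
          detach=True,
          mem_limit="2g",            # never more than 2 GB of RAM
          nano_cpus=2_000_000_000,   # at most two CPUs' worth of time
          network_disabled=True,     # no network access at all
          read_only=True,            # the container's root filesystem is read-only
      )
      print(container.short_id)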



    • So, in the end, what's the difference? They are really the same idea, only the container is much more developed than the VM. The example I will use is a car. If I go into my garage and I want to build a car, I have to follow some basic rules that will allow the car I build to interact with the world in a normal way (like it needs lights and wheels, etc.). That's a VM. Now, if I go down to the Tesla dealer and say "sell me a Tesla X right now, here's a box of money," I expect that the Tesla will already have all those things I expect, can be used right away, and will function normally. Now, the VM car can be massively customized, but that is guaranteed to create a lot of issues unless you are good at engineering cars. The Tesla is ready to go when you pick it up (if anyone wants to buy me a Tesla X, just email me where to pick it up. Thanks). That's a container.



    • Docker (docker.com) is a big provider of container tools and containers that are already built. Certainly, you can build your own (so now you are buying the basic setup and adding things to it), and there are products to manage all the containers you build (like Kubernetes), so it's a big opportunity for developers to create consistent environments for software.



    • One more example. Let's say you have an office and every employee needs Office, a product called smakMe, and salesSnake. In the old world, we have to install an image, which may need lots of different drivers (unless we happen to have very homogeneous hardware) and support. With Docker, we could roll out a container that holds everything these products need and will run on any Windows-based system. We can use Kubernetes to roll the containers out and see that they stay consistent. That means they can be authenticated, and if they change they can be disabled. Likewise, updates and changes to the containers can be rolled out seamlessly to all the employees.
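    • As a rough sketch of that rollout (assuming the official Kubernetes Python client, installed with "pip install kubernetes"; the deployment name, image, and registry are hypothetical stand-ins for the Office/smakMe/salesSnake bundle), pushing the same container to everyone and later updating it might look like this:

      from kubernetes import client, config  # official Kubernetes Python client

      config.load_kube_config()  # use your local kubeconfig to reach the cluster

      APP = "office-apps"  # hypothetical bundled image of the three products
      deployment = client.V1Deployment(
          metadata=client.V1ObjectMeta(name=APP),
          spec=client.V1DeploymentSpec(
              replicas=3,  # however many identical copies you want running
              selector=client.V1LabelSelector(match_labels={"app": APP}),
              template=client.V1PodTemplateSpec(
                  metadata=client.V1ObjectMeta(labels={"app": APP}),
                  spec=client.V1PodSpec(containers=[
                      client.V1Container(name=APP, image="registry.example.com/office-apps:1.0"),
                  ]),
              ),
          ),
      )

      apps = client.AppsV1Api()
      apps.create_namespaced_deployment(namespace="default", body=deployment)

      # Rolling out an update to everyone later is one change to the image tag:
      deployment.spec.template.spec.containers[0].image = "registry.example.com/office-apps:1.1"
      apps.patch_namespaced_deployment(name=APP, namespace="default", body=deployment)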



    • In the future... I think we will see more of these containers being used to quickly load things onto your system. If Steam wanted to build containers on the fly that contained your gaming library, they would essentially create that container with all your games, and it would basically only interact with your OS kernel, not your whole system. This means the game sellers would have more control over the stability of the games and could keep them consistent. Your whole system could then become one big collection of containers that interact in predictable and "secure" ways, because they could only interact in the manner the containers and your system allowed (so one couldn't likely write into another and make changes). This would mean malware would have a much greater challenge if you downloaded it, since it couldn't get inside the containers as readily as it can sneak into your day-to-day apps.



    • The cloud greatly facilitates this, since it can house meta instances of containers that are instantiated on the fly locally as you need them. To the user, it's seamless, but this instance of your email is really being housed in the cloud and only run locally when you need it. When you don't, it all goes away. This means you reduce your local hardware footprint.



    • The downside is loss of control of your stuff. Since it all moves into a container that you may not control (like, say, Steam), you may have to rely on that service very heavily if you want to keep it and use it. It also means that you may have less opportunity to customize locally and do what you want. But, unfortunately, security issues are already driving these models, and convenience, well, that is really making everyone jump on this conceptually, even if everything is not containerized yet (think Google, Apple, Microsoft, etc.). While containers are not quite there yet in terms of your browsing, word processing, etc., they will be very soon. Get ready.