Falco is a cloud-native runtime security system that works with both containers and raw Linux hosts. It is developed by Sysdig and is a sandbox project in the Cloud Native Computing Foundation. Falco monitors file changes, network activity, the process table, and other data for suspicious behavior, and sends alerts through a pluggable back end. It inspects events at the system call level of a host through a kernel module or an extended BPF (eBPF) probe. Falco ships with a rich set of rules that you can edit to flag specific abnormal behaviors and to create allow lists for normal computer operations.

In this tutorial, you learn how to install and set up Falco on a Kubernetes cluster on IBM Cloud, create a synthetic security incident, and view the incident in Falco. Finally, you wire up Falco to send runtime security alerts to Slack. This tutorial works equally well on standard Kubernetes and on Red Hat OpenShift on IBM Cloud.

Prerequisites

Before you begin, you need an IBM Cloud account and the following tools installed: the ibmcloud CLI, kubectl, git, and jq (plus oc if you use Red Hat OpenShift).

Clone the source code for this tutorial:

$ git clone https://gitlab.com/nibalizer/falco-iks
$ cd falco-iks

Verify that you have an IBM Cloud Kubernetes Service cluster set up and configured:

$ ibmcloud ks cluster get nibz-nightly-2019-03-29
Retrieving cluster nibz-nightly-2019-03-29...
OK


Name:                   nibz-nightly-2019-03-29
ID:                     0bddeb0c936d4b5a831849a399022389
State:                  normal
Created:                2019-03-29T08:01:08+0000
Location:               wdc06
Master URL:             https://c1.us-east.containers.cloud.ibm.com:30611
Master Location:        Washington D.C.
Master Status:          Ready (12 hours ago)
Ingress Subdomain:      nibz-nightly-2019-03-29.us-east.containers.appdomain.cloud
Ingress Secret:         nibz-nightly-2019-03-29
Workers:                3
Worker Zones:           wdc06
Version:                1.12.6_1546
Owner:                  skrum@us.ibm.com
Monitoring Dashboard:   -
Resource Group ID:      2a926a9173174d94a6eb13284e089f88
Resource Group Name:    default
$ kubectl get nodes -o wide
NAME             STATUS   ROLES    AGE   VERSION       INTERNAL-IP      EXTERNAL-IP      OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
10.188.103.223   Ready    <none>   12h   v1.12.6+IKS   10.188.103.223   169.63.131.195   Ubuntu 16.04.6 LTS   4.4.0-143-generic   containerd://1.1.6
10.188.103.242   Ready    <none>   12h   v1.12.6+IKS   10.188.103.242   169.63.131.201   Ubuntu 16.04.6 LTS   4.4.0-143-generic   containerd://1.1.6
10.188.103.248   Ready    <none>   12h   v1.12.6+IKS   10.188.103.248   169.63.131.248   Ubuntu 16.04.6 LTS   4.4.0-143-generic   containerd://1.1.6

The container runtime environment is containerd.

Estimated time

Completing this tutorial should take about 20 minutes.

Configure Falco to run on Kubernetes

This tutorial uses the k8s-with-rbac files because the cluster runs Kubernetes 1.12, which uses role-based access control (RBAC). Setting up Falco requires several files: a combination of Kubernetes configuration to run the Falco DaemonSet and configuration for the daemon itself.

Step 1: Review the Falco files

Falco uses a service account in Kubernetes to access the Kubernetes API. The falco-account.yaml spec sets up the common role-based access control (RBAC) triple: a ServiceAccount, a ClusterRole, and a ClusterRoleBinding. The ClusterRole specifies what access is granted. If you change nothing in these files, the Falco daemon can read and list, but not modify, objects in the Kubernetes API.
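As a sketch, the spec has roughly this shape (the resource lists and verbs shown here are illustrative; the falco-account.yaml file in the repository is the source of truth):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: falco-account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: falco-cluster-role
rules:
  - apiGroups: [""]
    resources: ["pods", "events", "namespaces"]  # illustrative subset
    verbs: ["get", "list"]                       # read-only: no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: falco-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: falco-cluster-role
subjects:
  - kind: ServiceAccount
    name: falco-account
    namespace: default
```

The ClusterRoleBinding is what ties the read-only ClusterRole to the falco-account ServiceAccount that the DaemonSet runs as.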

$ ls -l
total 20
-rw-r--r-- 1 nibz nibz  931 Mar 29 15:46 falco-account.yaml
drwxr-xr-x 2 nibz nibz 4096 Mar 29 15:51 falco-config/
-rw-r--r-- 1 nibz nibz 2138 Mar 29 15:48 falco-daemonset-configmap.yaml
-rw-r--r-- 1 nibz nibz  196 Mar 29 15:45 falco-service.yaml
-rw-r--r-- 1 nibz nibz   13 Mar 29 15:27 Readme.md

Step 2: Configure role-based access control (RBAC)

Run the following command:

$ kubectl apply -f falco-account.yaml
serviceaccount/falco-account created
clusterrole.rbac.authorization.k8s.io/falco-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/falco-cluster-role-binding created

If you are using OpenShift, set up the OpenShift Security Context Constraints for the account you just created.

$ oc adm policy add-scc-to-user privileged -z falco-account

Step 3: Apply the Falco service object

Falco also needs a Kubernetes service for its web front end:

$ kubectl apply -f falco-service.yaml
service/falco-service created

Step 4: Create the falco-config ConfigMap

Falco’s configuration is split across several files. falco.yaml configures the daemon’s particulars: output type, ports, and so on. The other *_rules.yaml files contain the checks that Falco fires on (shells being opened, files being modified, and so on). Combine all of these files into a single ConfigMap by pointing the --from-file argument at the directory:

$ kubectl create configmap falco-config --from-file=falco-config
configmap/falco-config created

Later, when you deploy the DaemonSet, this ConfigMap is mounted at /etc/falco.

Step 5: Start the Falco DaemonSet

Finally, run the Falco application itself. It runs as a DaemonSet, which schedules one pod per node:

$ kubectl apply -f falco-daemonset-configmap.yaml
daemonset.extensions/falco-daemonset created

During its first run, the pod uses Dynamic Kernel Module Support (DKMS) to compile and install a kernel module, which is how Falco captures system calls.

Step 6: Check that the pods started correctly

Run the following command to view the pods and their status:

$ kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
falco-daemonset-99p8j   1/1     Running   0          26s
falco-daemonset-wf2lf   1/1     Running   0          26s
falco-daemonset-wqrwm   1/1     Running   0          26s

Step 7: Check the logs and read DKMS messages

Check the logs and note the messages from DKMS, which show the falco-probe kernel module being built and loaded:

$ kubectl logs falco-daemonset-wf2lf
* Setting up /usr/src links from host
* Unloading falco-probe, if present
* Running dkms install for falco

Kernel preparation unnecessary for this kernel.  Skipping...

Building module:
cleaning build area...
make -j2 KERNELRELEASE=4.4.0-148-generic -C /lib/modules/4.4.0-148-generic/build M=/var/lib/dkms/falco/0.1.2780dev/build....
cleaning build area...

DKMS: build completed.

falco-probe.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.4.0-148-generic/kernel/extra/
mkdir: cannot create directory '/lib/modules/4.4.0-148-generic/kernel/extra': Read-only file system
cp: cannot create regular file '/lib/modules/4.4.0-148-generic/kernel/extra/falco-probe.ko': No such file or directory

depmod...

DKMS: install completed.
* Trying to load a dkms falco-probe, if present
falco-probe found and loaded in dkms
Wed May 29 14:55:40 2019: Falco initialized with configuration file /etc/falco/falco.yaml
Wed May 29 14:55:40 2019: Loading rules from file /etc/falco/falco_rules.yaml:
Wed May 29 14:55:40 2019: Loading rules from file /etc/falco/falco_rules.local.yaml:
Wed May 29 14:55:40 2019: Loading rules from file /etc/falco/k8s_audit_rules.yaml:
Wed May 29 14:55:41 2019: Starting internal webserver, listening on port 8765
{"output":"00:00:00.048356736: Informational Container with sensitive mount started (user=root command=container:7c5302fccfcb k8s.ns=<NA> k8s.pod=<NA> container=7c5302fccfcb image=registry.ng.bluemix.net/armada-master/ibm-kube-fluentd-collector:c16fe1602ab65db4af0a6ac008f99ca2a526e6f6 mounts=/etc/kubernetes/:/etc/kubernetes::false:private,/:/host::false:private,/var/log/:/var/log::false:private,/var/lib/docker:/var/lib/docker::false:private,/var/run/docker.sock:/var/run/docker.sock::false:private,/mnt/ibm-kube-fluentd-persist:/mnt/ibm-kube-fluentd-persist::true:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~secret/logmet-secrets-volume:/mnt/logmet/secrets::false:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~configmap/fluentd-config:/fluentd/etc/config.d/logmet/::false:private,/var/data:/var/data::false:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~configmap/at-fluentd-config:/fluentd/etc/config.d/at/::false:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~secret/activity-tracker-secrets-volume:/mnt/activity-tracker/secrets/::false:private,/var/log/at:/var/log/at::false:private,/var/log/at-no-rotate:/var/log/at-no-rotate::false:private,/run/containerd:/run/containerd::false:private,/run/containerd/containerd.sock:/run/containerd/containerd.sock::true:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~secret/ibm-kube-fluentd-token-v2q5s:/var/run/secrets/kubernetes.io/serviceaccount::false:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/etc-hosts:/etc/hosts::true:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/containers/fluentd/35f64419:/dev/termination-log::true:private) k8s.ns=<NA> k8s.pod=<NA> container=7c5302fccfcb","priority":"Informational","rule":"Launch Sensitive Mount 
Container","time":"1970-01-01T00:00:00.048356736Z", "output_fields": {"container.id":"7c5302fccfcb","container.image.repository":"registry.ng.bluemix.net/armada-master/ibm-kube-fluentd-collector","container.image.tag":"c16fe1602ab65db4af0a6ac008f99ca2a526e6f6","container.mounts":"/etc/kubernetes/:/etc/kubernetes::false:private,/:/host::false:private,/var/log/:/var/log::false:private,/var/lib/docker:/var/lib/docker::false:private,/var/run/docker.sock:/var/run/docker.sock::false:private,/mnt/ibm-kube-fluentd-persist:/mnt/ibm-kube-fluentd-persist::true:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~secret/logmet-secrets-volume:/mnt/logmet/secrets::false:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~configmap/fluentd-config:/fluentd/etc/config.d/logmet/::false:private,/var/data:/var/data::false:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~configmap/at-fluentd-config:/fluentd/etc/config.d/at/::false:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~secret/activity-tracker-secrets-volume:/mnt/activity-tracker/secrets/::false:private,/var/log/at:/var/log/at::false:private,/var/log/at-no-rotate:/var/log/at-no-rotate::false:private,/run/containerd:/run/containerd::false:private,/run/containerd/containerd.sock:/run/containerd/containerd.sock::true:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~secret/ibm-kube-fluentd-token-v2q5s:/var/run/secrets/kubernetes.io/serviceaccount::false:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/etc-hosts:/etc/hosts::true:private,/var/data/kubelet/pods/8bf0a002-81eb-11e9-b9cf-c68b81a15994/containers/fluentd/35f64419:/dev/termination-log::true:private","evt.time":48356736,"k8s.ns.name":null,"k8s.pod.name":null,"proc.cmdline":"container:7c5302fccfcb","user.name":"root"}}
{"output":"00:00:00.048356736: Informational Container with sensitive mount started (user=<NA> command=container:721ef5130945 k8s.ns=<NA> k8s.pod=<NA> container=721ef5130945 image=docker.io/falcosecurity/falco:dev mounts=/run/containerd/containerd.sock:/host/run/containerd/containerd.sock::true:private,/dev:/host/dev::true:private,/proc:/host/proc::false:private,/boot:/host/boot::false:private,/lib/modules:/host/lib/modules::false:private,/usr:/host/usr::false:private,/etc:/host/etc/::false:private,/var/data/kubelet/pods/c09f7a8d-8221-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~configmap/falco-config:/etc/falco::false:private,/var/data/kubelet/pods/c09f7a8d-8221-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~secret/falco-account-token-jlddc:/var/run/secrets/kubernetes.io/serviceaccount::false:private,/var/data/kubelet/pods/c09f7a8d-8221-11e9-b9cf-c68b81a15994/etc-hosts:/etc/hosts::true:private,/var/data/kubelet/pods/c09f7a8d-8221-11e9-b9cf-c68b81a15994/containers/falco/aa75ae83:/dev/termination-log::true:private) k8s.ns=<NA> k8s.pod=<NA> container=721ef5130945","priority":"Informational","rule":"Launch Sensitive Mount Container","time":"1970-01-01T00:00:00.048356736Z", "output_fields": 
{"container.id":"721ef5130945","container.image.repository":"docker.io/falcosecurity/falco","container.image.tag":"dev","container.mounts":"/run/containerd/containerd.sock:/host/run/containerd/containerd.sock::true:private,/dev:/host/dev::true:private,/proc:/host/proc::false:private,/boot:/host/boot::false:private,/lib/modules:/host/lib/modules::false:private,/usr:/host/usr::false:private,/etc:/host/etc/::false:private,/var/data/kubelet/pods/c09f7a8d-8221-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~configmap/falco-config:/etc/falco::false:private,/var/data/kubelet/pods/c09f7a8d-8221-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~secret/falco-account-token-jlddc:/var/run/secrets/kubernetes.io/serviceaccount::false:private,/var/data/kubelet/pods/c09f7a8d-8221-11e9-b9cf-c68b81a15994/etc-hosts:/etc/hosts::true:private,/var/data/kubelet/pods/c09f7a8d-8221-11e9-b9cf-c68b81a15994/containers/falco/aa75ae83:/dev/termination-log::true:private","evt.time":48356736,"k8s.ns.name":null,"k8s.pod.name":null,"proc.cmdline":"container:721ef5130945","user.name":null}}

Falco is running and logging events.

Review the configuration for Falco

Now, take a peek at the configuration you set up for Falco.

Step 1: Examine the DaemonSet configuration

The DaemonSet runs with the service account and permissions you set up earlier:

$ cat falco-daemonset-configmap.yaml | grep serviceAcc
      serviceAccount: falco-account

Check out falco-account.yaml for details. This configuration provides read access to almost everything in the Kubernetes API server.

You mount many important directories from the Kubernetes host into the Falco pod:

$ cat falco-daemonset-configmap.yaml | grep -A 11 volumes:
      volumes:
        - name: containerd-socket
          hostPath:
            path: /run/containerd/containerd.sock
        - name: dev-fs
          hostPath:
            path: /dev
        - name: proc-fs
          hostPath:
            path: /proc
        - name: boot-fs
          hostPath:

This configuration enables Falco to interact with the container runtime to pull container metadata (like the container name and underlying image name) and to query the host’s process table to discover process names. Also note that this example mounts the containerd socket instead of a Docker socket.

Step 2: Examine the Falco configuration files

Falco is configured by the YAML files that you packaged into a ConfigMap earlier. falco.yaml configures server settings, and falco_rules.yaml contains the rules for what to alert on and at what level.

$ ls falco-config/
falco_rules.local.yaml  falco_rules.yaml  falco.yaml  k8s_audit_rules.yaml

Step 3: View a Falco rule

This rule watches for potentially nefarious Netcat commands and raises a WARNING-level alert when it sees one:

$ cat falco-config/falco_rules.yaml | grep -A 12 'Netcat Remote'
- rule: Netcat Remote Code Execution in Container
  desc: Netcat Program runs inside container that allows remote code execution
  condition: >
    spawned_process and container and
    ((proc.name = "nc" and (proc.args contains "-e" or proc.args contains "-c")) or
     (proc.name = "ncat" and (proc.args contains "--sh-exec" or proc.args contains "--exec"))
    )
  output: >
    Netcat runs inside container that allows remote code execution (user=%user.name
    command=%proc.cmdline container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
  priority: WARNING
  tags: [network, process]
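If you want Falco to flag additional behaviors, you can add your own rules to falco-config/falco_rules.local.yaml using the same schema. A hedged sketch (the rule name and condition here are illustrative, not part of the stock rule set):

```yaml
- rule: Write Below Etc In Container
  desc: >
    Illustrative example: detect a file opened for writing under /etc
    inside a container
  condition: >
    open_write and container and fd.name startswith /etc
  output: >
    File opened for writing below /etc (user=%user.name command=%proc.cmdline
    file=%fd.name container=%container.name image=%container.image.repository)
  priority: WARNING
  tags: [filesystem]
```

After editing, recreate the falco-config ConfigMap and restart the DaemonSet for the change to take effect.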

Watch Falco in action

Now you can see Falco in action. You tail the logs in one terminal, then synthetically create some events in a second terminal and watch them come through the logs.

Step 1: Tail the logs in the first terminal

Run the following command:

$ kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
falco-daemonset-99p8j   1/1     Running   0          112m
falco-daemonset-wf2lf   1/1     Running   0          112m
falco-daemonset-wqrwm   1/1     Running   0          112m
$ kubectl logs -f falco-daemonset-99p8j
* Setting up /usr/src links from host
* Unloading falco-probe, if present
* Running dkms install for falco

Kernel preparation unnecessary for this kernel.  Skipping...

Building module:
cleaning build area...
make -j2 KERNELRELEASE=4.4.0-148-generic -C /lib/modules/4.4.0-148-generic/build M=/var/lib/dkms/falco/0.1.2780dev/build....
cleaning build area...

DKMS: build completed.

falco-probe.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.4.0-148-generic/kernel/extra/
mkdir: cannot create directory '/lib/modules/4.4.0-148-generic/kernel/extra': Read-only file system
cp: cannot create regular file '/lib/modules/4.4.0-148-generic/kernel/extra/falco-probe.ko': No such file or directory

depmod...

DKMS: install completed.
* Trying to load a dkms falco-probe, if present
falco-probe found and loaded in dkms
Wed May 29 14:55:36 2019: Falco initialized with configuration file /etc/falco/falco.yaml
Wed May 29 14:55:36 2019: Loading rules from file /etc/falco/falco_rules.yaml:
Wed May 29 14:55:37 2019: Loading rules from file /etc/falco/falco_rules.local.yaml:
Wed May 29 14:55:37 2019: Loading rules from file /etc/falco/k8s_audit_rules.yaml:
Wed May 29 14:55:38 2019: Starting internal webserver, listening on port 8765
{"output":"00:00:00.020155776: Informational Container with sensitive mount started (user=<NA> command=container:9d56002def78 k8s.ns=<NA> k8s.pod=<NA> container=9d56002def78 image=docker.io/falcosecurity/falco:dev mounts=/run/containerd/containerd.sock:/host/run/containerd/containerd.sock::true:private,/dev:/host/dev::true:private,/proc:/host/proc::false:private,/boot:/host/boot::false:private,/lib/modules:/host/lib/modules::false:private,/usr:/host/usr::false:private,/etc:/host/etc/::false:private,/var/data/kubelet/pods/c0a1a131-8221-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~configmap/falco-config:/etc/falco::false:private,/var/data/kubelet/pods/c0a1a131-8221-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~secret/falco-account-token-jlddc:/var/run/secrets/kubernetes.io/serviceaccount::false:private,/var/data/kubelet/pods/c0a1a131-8221-11e9-b9cf-c68b81a15994/etc-hosts:/etc/hosts::true:private,/var/data/kubelet/pods/c0a1a131-8221-11e9-b9cf-c68b81a15994/containers/falco/cb0bed6e:/dev/termination-log::true:private) k8s.ns=<NA> k8s.pod=<NA> container=9d56002def78","priority":"Informational","rule":"Launch Sensitive Mount Container","time":"1970-01-01T00:00:00.020155776Z", "output_fields": 
{"container.id":"9d56002def78","container.image.repository":"docker.io/falcosecurity/falco","container.image.tag":"dev","container.mounts":"/run/containerd/containerd.sock:/host/run/containerd/containerd.sock::true:private,/dev:/host/dev::true:private,/proc:/host/proc::false:private,/boot:/host/boot::false:private,/lib/modules:/host/lib/modules::false:private,/usr:/host/usr::false:private,/etc:/host/etc/::false:private,/var/data/kubelet/pods/c0a1a131-8221-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~configmap/falco-config:/etc/falco::false:private,/var/data/kubelet/pods/c0a1a131-8221-11e9-b9cf-c68b81a15994/volumes/kubernetes.io~secret/falco-account-token-jlddc:/var/run/secrets/kubernetes.io/serviceaccount::false:private,/var/data/kubelet/pods/c0a1a131-8221-11e9-b9cf-c68b81a15994/etc-hosts:/etc/hosts::true:private,/var/data/kubelet/pods/c0a1a131-8221-11e9-b9cf-c68b81a15994/containers/falco/cb0bed6e:/dev/termination-log::true:private","evt.time":20155776,"k8s.ns.name":null,"k8s.pod.name":null,"proc.cmdline":"container:9d56002def78","user.name":null}}
...

Step 2: Create a security event in a second terminal

Remember to re-export KUBECONFIG:

$ export KUBECONFIG=/home/nibz/.bluemix/plugins/container-service/clusters/yourcluster.yml
$ kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
falco-daemonset-99p8j   1/1     Running   0          3h2m
falco-daemonset-wf2lf   1/1     Running   0          3h2m
falco-daemonset-wqrwm   1/1     Running   0          3h2m
$ kubectl exec -it falco-daemonset-99p8j /bin/bash
root@falco-daemonset-99p8j:/# echo "I'm in!"
I'm in!
root@falco-daemonset-99p8j:/#

In the first terminal you can see the event:

{"output":"17:58:28.064781208: Notice A shell was spawned in a container with an attached terminal (user=root k8s.ns=default k8s.pod=falco-daemonset-99p8j container=9d56002def78 shell=bash parent=<NA> cmdline=bash terminal=34816) k8s.ns=default k8s.pod=falco-daemonset-99p8j container=9d56002def78","priority":"Notice","rule":"Terminal shell in container","time":"2019-05-29T17:58:28.064781208Z", "output_fields": {"container.id":"9d56002def78","evt.time":1559152708064781208,"k8s.ns.name":"default","k8s.pod.name":"falco-daemonset-99p8j","proc.cmdline":"bash","proc.name":"bash","proc.pname":null,"proc.tty":34816,"user.name":"root"}}

Step 3: Process the event with jq

When you process the event with jq, you see that Falco provides useful information about the security event, including the full Kubernetes context for the event, such as pod name and namespace:

$ echo '{"output":"17:58:28.064781208: Notice A shell was spawned in a container with an attached terminal (user=root k8s.ns=default k8s.pod=falco-daemonset-99p8j container=9d56002def78 shell=bash parent=<NA> cmdline=bash terminal=34816) k8s.ns=default k8s.pod=falco-daemonset-99p8j container=9d56002def78","priority":"Notice","rule":"Terminal shell in container","time":"2019-05-29T17:58:28.064781208Z", "output_fields": {"container.id":"9d56002def78","evt.time":1559152708064781208,"k8s.ns.name":"default","k8s.pod.name":"falco-daemonset-99p8j","proc.cmdline":"bash","proc.name":"bash","proc.pname":null,"proc.tty":34816,"user.name":"root"}}
> ' | jq '.'
{
  "output": "17:58:28.064781208: Notice A shell was spawned in a container with an attached terminal (user=root k8s.ns=default k8s.pod=falco-daemonset-99p8j container=9d56002def78 shell=bash parent=<NA> cmdline=bash terminal=34816) k8s.ns=default k8s.pod=falco-daemonset-99p8j container=9d56002def78",
  "priority": "Notice",
  "rule": "Terminal shell in container",
  "time": "2019-05-29T17:58:28.064781208Z",
  "output_fields": {
    "container.id": "9d56002def78",
    "evt.time": 1559152708064781300,
    "k8s.ns.name": "default",
    "k8s.pod.name": "falco-daemonset-99p8j",
    "proc.cmdline": "bash",
    "proc.name": "bash",
    "proc.pname": null,
    "proc.tty": 34816,
    "user.name": "root"
  }
}
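Beyond jq, you can consume these alerts programmatically in any language with a JSON parser. A minimal Python sketch (the event below is abridged from the log output above) that pulls out the fields a responder cares about:

```python
import json

# An abridged Falco alert, taken from the log output above
raw = '''{"priority": "Notice", "rule": "Terminal shell in container",
"time": "2019-05-29T17:58:28.064781208Z",
"output_fields": {"container.id": "9d56002def78",
"k8s.ns.name": "default", "k8s.pod.name": "falco-daemonset-99p8j",
"proc.cmdline": "bash", "user.name": "root"}}'''

event = json.loads(raw)
fields = event["output_fields"]

# Summarize the alert together with its Kubernetes context
summary = "{priority}: rule '{rule}' fired in {ns}/{pod} (cmd: {cmd}, user: {user})".format(
    priority=event["priority"],
    rule=event["rule"],
    ns=fields["k8s.ns.name"],
    pod=fields["k8s.pod.name"],
    cmd=fields["proc.cmdline"],
    user=fields["user.name"],
)
print(summary)
```

This kind of extraction is the basis for routing alerts to a pager, a ticket system, or (as shown next) a chat channel.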

Now you can trigger the Netcat rule you displayed earlier:

root@falco-daemonset-99p8j:/# nc -l 4444
^C
kubectl logs falco-daemonset-99p8j
...

{"output":"18:00:41.530249297: Notice Network tool launched in container (user=root command=nc -l 4444 container_id=9d56002def78 container_name=falco image=docker.io/falcosecurity/falco:dev) k8s.ns=default k8s.pod=falco-daemonset-99p8j container=9d56002def78 k8s.ns=default k8s.pod=falco-daemonset-99p8j container=9d56002def78","priority":"Notice","rule":"Lauch Suspicious Network Tool in Container","time":"2019-05-29T18:00:41.530249297Z", "output_fields": {"container.id":"9d56002def78","container.image.repository":"docker.io/falcosecurity/falco","container.image.tag":"dev","container.name":"falco","evt.time":1559152841530249297,"k8s.ns.name":"default","k8s.pod.name":"falco-daemonset-99p8j","proc.cmdline":"nc -l 4444","user.name":"root"}}
...
$ echo '{"output":"18:00:41.530249297: Notice Network tool launched in container (user=root command=nc -l 4444 container_id=9d56002def78 container_name=falco image=docker.io/falcosecurity/falco:dev) k8s.ns=default k8s.pod=falco-daemonset-99p8j container=9d56002def78 k8s.ns=default k8s.pod=falco-daemonset-99p8j container=9d56002def78","priority":"Notice","rule":"Lauch Suspicious Network Tool in Container","time":"2019-05-29T18:00:41.530249297Z", "output_fields": {"container.id":"9d56002def78","container.image.repository":"docker.io/falcosecurity/falco","container.image.tag":"dev","container.name":"falco","evt.time":1559152841530249297,"k8s.ns.name":"default","k8s.pod.name":"falco-daemonset-99p8j","proc.cmdline":"nc -l 4444","user.name":"root"}}' | jq '.'
{
  "output": "18:00:41.530249297: Notice Network tool launched in container (user=root command=nc -l 4444 container_id=9d56002def78 container_name=falco image=docker.io/falcosecurity/falco:dev) k8s.ns=default k8s.pod=falco-daemonset-99p8j container=9d56002def78 k8s.ns=default k8s.pod=falco-daemonset-99p8j container=9d56002def78",
  "priority": "Notice",
  "rule": "Lauch Suspicious Network Tool in Container",
  "time": "2019-05-29T18:00:41.530249297Z",
  "output_fields": {
    "container.id": "9d56002def78",
    "container.image.repository": "docker.io/falcosecurity/falco",
    "container.image.tag": "dev",
    "container.name": "falco",
    "evt.time": 1559152841530249200,
    "k8s.ns.name": "default",
    "k8s.pod.name": "falco-daemonset-99p8j",
    "proc.cmdline": "nc -l 4444",
    "user.name": "root"
  }
}

You have now seen what kinds of events Falco can detect and a bit of how its rules are configured.

Send alerts to a Slack channel

Now, you can do something more interesting with an alert than dumping it to standard output. Falco supports several output channels; this tutorial shows how to send alerts to a Slack channel.

  1. Change falco-config/falco.yaml to include your Slack webhook URL and set enabled to true.

    See how to create an incoming webhook in Slack at api.slack.com/incoming-webhooks.

    $ cat falco-config/falco.yaml | grep -A1 -B3 hooks.slack
    --
    program_output:
      enabled: false
      keep_alive: false
      program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX"
    
    vim falco-config/falco.yaml
    
    $ cat falco-config/falco.yaml | grep -A1 -B3 hooks.slack
    program_output:
      enabled: true
      keep_alive: false
      program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXXXXXXXX/XXXXXXXXX/xxxxxxxxxxxxxxxxxxxxxxxx"
    
  2. Redo both the ConfigMap and the DaemonSet:

    $ kubectl delete configmap falco-config
    $ kubectl create configmap falco-config --from-file=falco-config
    $ kubectl delete -f falco-daemonset-configmap.yaml
    $ kubectl apply -f falco-daemonset-configmap.yaml
    
  3. Spawn a shell in one of the containers again (as you did earlier with kubectl exec), and you see a security alert posted to Slack:

    (Screenshot: the Falco alert as it appears in the Slack channel)
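The program_output entry pipes each alert through jq '{text: .output}' before POSTing it to the webhook. As a sketch, the same transformation in Python (the alert is abridged from the logs, and the webhook URL is a placeholder):

```python
import json

# A Falco alert as emitted on stdout (abridged from the logs above)
alert = {
    "output": "18:00:41.530249297: Notice Network tool launched in container (user=root command=nc -l 4444)",
    "priority": "Notice",
}

# Slack incoming webhooks expect a JSON body with a "text" field;
# this mirrors what `jq '{text: .output}'` produces in the program_output pipeline.
payload = json.dumps({"text": alert["output"]})
print(payload)

# Posting the payload would look roughly like this (webhook URL is a placeholder):
#   import urllib.request
#   req = urllib.request.Request("https://hooks.slack.com/services/XXX",
#                                data=payload.encode(),
#                                headers={"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
```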

Summary

You can do a lot more with Falco, including hooking up Falco events to serverless computing platforms such as OpenWhisk and Knative. Hopefully, this introduction gave you some basic information that helps you get started with your next project.

Acknowledgements

A big thanks to Michael Ducy (@mfdii) for helping me get this tutorial working.