Falco is a cloud-native runtime security system that works with both containers and raw Linux hosts. It was developed by Sysdig and is an incubating project in the Cloud Native Computing Foundation. Falco watches file changes, network activity, the process table, and other data for suspicious behavior and sends alerts through a pluggable back end. It inspects events at the system call level of a host through a kernel module or an extended BPF probe. Falco ships with a rich set of rules that you can edit to flag specific abnormal behaviors and to create allow lists for normal operations.
In this tutorial, you learn to install and set up Falco on a Kubernetes cluster on IBM Cloud, create a synthetic security incident, and view the incident in Falco. Then, you send all security incidents into LogDNA for aggregation. Finally, you wire up Falco to send security alerts at run time to Slack. This tutorial works equally well on standard Kubernetes and on Red Hat OpenShift on IBM Cloud.
Estimated time
Completing this tutorial should take about 20 minutes.
Prerequisites
Before you begin, you need the following software:
- A free IBM Cloud account
- Falco
- IBM Cloud Kubernetes Service
- LogDNA on IBM Cloud (optional)
- Slack (optional)
The source code you need is in the Falco Helm chart repository:
$ git clone https://github.com/falcosecurity/charts
Cloning into 'charts'...
remote: Enumerating objects: 456, done.
remote: Counting objects: 100% (456/456), done.
remote: Compressing objects: 100% (158/158), done.
remote: Total 456 (delta 316), reused 416 (delta 295), pack-reused 0
Receiving objects: 100% (456/456), 182.27 KiB | 2.76 MiB/s, done.
Resolving deltas: 100% (316/316), done.
$ cd charts/falco
$ ls
Verify that you have an IBM Cloud Kubernetes Service cluster set up and configured:
$ ibmcloud ks cluster get --cluster nibz-development
Retrieving cluster nibz-development...
OK
Name: nibz-development
ID: br3dsptd0mfheg0375g0
State: normal
Created: 2020-05-21T20:02:47+0000
Location: dal12
Master URL: https://c108.us-south.containers.cloud.ibm.com:31236
Public Service Endpoint URL: https://c108.us-south.containers.cloud.ibm.com:31236
Private Service Endpoint URL: https://c108.private.us-south.containers.cloud.ibm.com:31236
Master Location: Dallas
Master Status: Ready (4 days ago)
Master State: deployed
Master Health: normal
Ingress Subdomain: nibz-development-dff43bc8701fcd5837d6de963718ad39-0000.us-south.containers.appdomain.cloud
Ingress Secret: nibz-development-dff43bc8701fcd5837d6de963718ad39-0000
Workers: 3
Worker Zones: dal12
Version: 1.18.2_1512
Creator: -
Monitoring Dashboard: -
Resource Group ID: 75e353d82014457991ec7cbac09854ea
Resource Group Name: Default
$ ibmcloud ks cluster config --cluster nibz-development
OK
The configuration for nibz-development was downloaded successfully.
Added context for nibz-development to the current kubeconfig file.
You can now execute 'kubectl' commands against your cluster. For example, run 'kubectl get nodes'.
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
10.241.155.14 Ready <none> 4d23h v1.18.2+IKS 10.241.155.14 169.59.251.12 Ubuntu 18.04.4 LTS 4.15.0-99-generic containerd://1.3.4
10.241.155.17 Ready <none> 4d23h v1.18.2+IKS 10.241.155.17 169.59.251.13 Ubuntu 18.04.4 LTS 4.15.0-99-generic containerd://1.3.4
10.241.155.32 Ready <none> 4d23h v1.18.2+IKS 10.241.155.32 169.59.251.14 Ubuntu 18.04.4 LTS 4.15.0-99-generic containerd://1.3.4
The container runtime environment is containerd.
Set up Helm
To install Falco, you use the Helm chart. If you don’t already have Helm installed, install it first (see the Helm documentation).
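If you’re not sure whether Helm is available, a quick check is the following (Helm 3 syntax is assumed throughout this tutorial):
$ helm version --short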
1: Edit the values.yaml file
$ head values.yaml
# Default values for falco.
image:
  registry: docker.io
  repository: falcosecurity/falco
  tag: 0.23.0
  pullPolicy: IfNotPresent
docker:
  enabled: true
This lays out which version of Falco you will be using, in this case 0.23.0. As this tutorial ages, you might try changing the tag to later versions or to the master tag. The values.yaml file is the main entry point for configuration changes to the Falco daemon; you will return to it often.
The only change to make now is to disable Docker. IBM Cloud Kubernetes Service uses containerd, so you want to disable Docker support so that the daemon retrieves Kubernetes metadata correctly. To do that, set enabled: true to enabled: false in the docker section of values.yaml, around line 9.
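After the edit, the docker section should look roughly like this (a sketch; if your chart version also has a containerd section, leave that one enabled):
docker:
  enabled: false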
2: Install Falco using Helm
$ helm install falco .
NAME: falco
LAST DEPLOYED: Tue May 26 19:42:59 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Falco agents are spinning up on each node in your cluster. After a few
seconds, they are going to start monitoring your containers looking for
security issues.
No further action should be required.
And validate (in these examples, k is an alias for kubectl):
$ k get pod
NAME READY STATUS RESTARTS AGE
falco-5lqs4 1/1 Running 0 4m47s
falco-lm2x2 1/1 Running 0 4m47s
falco-pwc5l 1/1 Running 0 4m47s
$ k logs falco-pwc5l
* Setting up /usr/src links from host
* Running falco-driver-loader with: driver=module, compile=yes, download=yes
* Unloading falco module, if present
* Trying to dkms install falco module
Kernel preparation unnecessary for this kernel. Skipping...
Building module:
cleaning build area....
make -j4 KERNELRELEASE=4.15.0-99-generic -C /lib/modules/4.15.0-99-generic/build M=/var/lib/dkms/falco/96bd9bc560f67742738eb7255aeb4d03046b8045/build...........
cleaning build area....
DKMS: build completed.
falco.ko:
Running module version sanity check.
depmod...
DKMS: install completed.
* falco module installed in dkms, trying to insmod
* Success: falco module found and loaded in dkms
Tue May 26 19:43:39 2020: Falco initialized with configuration file /etc/falco/falco.yaml
Tue May 26 19:43:39 2020: Loading rules from file /etc/falco/falco_rules.yaml:
Tue May 26 19:43:40 2020: Loading rules from file /etc/falco/falco_rules.local.yaml:
Tue May 26 19:43:42 2020: Starting internal webserver, listening on port 8765
Don’t worry if your output doesn’t exactly match the above. However, you should see the pods go into the ‘Running’ state, and the logs should be free of errors and mention “Falco initialized…”.
During its first-run installation, Falco uses Dynamic Kernel Module Support (DKMS) to compile and install a kernel module, which is how Falco picks up the system calls.
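If you want to double-check that the module actually loaded, one quick sanity check is to run lsmod from inside one of the Falco pods (a sketch, assuming the pod name from above and that lsmod is available in the Falco image):
$ k exec falco-pwc5l -- lsmod | grep falco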
3: Inspect the installation
Service Accounts
Falco uses a service account in Kubernetes to access the Kubernetes API. Falco needs that access to tie security incidents to the relevant container. The Helm chart sets up the usual role-based access control triple: a ServiceAccount, a ClusterRole, and a ClusterRoleBinding. The ClusterRole describes what access is being granted. If you change nothing in these files, the Falco daemon can read and list, but not modify, objects in the Kubernetes API.
$ ls templates/clusterrolebinding.yaml templates/serviceaccount.yaml templates/clusterrole.yaml
templates/clusterrolebinding.yaml templates/clusterrole.yaml templates/serviceaccount.yaml
$ k get clusterrole falco
NAME CREATED AT
falco 2020-05-26T19:43:00Z
$ k get serviceaccount falco
NAME SECRETS AGE
falco 1 7m54s
$ k get clusterrolebinding falco
NAME ROLE AGE
falco ClusterRole/falco 7m59s
You can inspect these resources more deeply with the -o yaml flag. Note that not all Kubernetes resources are namespaced: clusterrole and clusterrolebinding are not namespaced, but serviceaccount is.
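For example, to see exactly which API groups, resources, and verbs the role grants:
$ k get clusterrole falco -o yaml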
Daemonset
Falco runs a DaemonSet for the Falco daemon itself. DaemonSets are a nice fit for Falco because they ensure that a single copy of the program runs on each node.
$ k get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
falco 3 3 3 3 3 <none> 11m
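To confirm that exactly one Falco pod landed on each worker node, compare the NODE column of the pods against your node list (pod names will differ in your cluster):
$ k get pod -o wide | grep falco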
ConfigMap
Falco’s configuration is split across several files. falco.yaml covers the daemon’s particulars: output type, ports, and so on. The *_rules.yaml files contain the checks that Falco fires against (shells being opened, files being modified, and so on). falco.yaml is templated out by the Helm chart, and the rules files are copied in directly. The DaemonSet mounts this ConfigMap under /etc/falco.
# The template for falco.yaml:
$ head -n 25 templates/configmap.yaml | tail -n 15
  falco.yaml: |-
    # File(s) or Directories containing Falco rules, loaded at startup.
    # The name "rules_file" is only for backwards compatibility.
    # If the entry is a file, it will be read directly. If the entry is a directory,
    # every file in that directory will be read, in alphabetical order.
    #
    # falco_rules.yaml ships with the falco package and is overridden with
    # every new software version. falco_rules.local.yaml is only created
    # if it doesn't exist. If you want to customize the set of rules, add
    # your customizations to falco_rules.local.yaml.
    #
    # The files will be read in the order presented here, so make sure if
    # you have overrides they appear in later files.
    rules_file:
    {{- range .Values.falco.rulesFile }}
# The rules files:
$ ls rules/
application_rules.yaml falco_rules.local.yaml falco_rules.yaml k8s_audit_rules.yaml
# Inspect in Kubernetes
$ k get cm falco
NAME DATA AGE
falco 5 16m
Review the configuration for Falco
Now, take a peek at the configuration you set up for Falco.
1: Examine the DaemonSet configuration
Run the following command to confirm that the DaemonSet runs with the service account and permissions you set up earlier:
$ kubectl get ds falco -o yaml | grep serviceAcc
serviceAccount: falco-account
Check out the ClusterRole for details: this configuration provides read access to almost everything in the Kubernetes API server.
You mount many important directories from the Kubernetes host into the Falco pod:
$ k get ds falco -o yaml | grep -A 11 volumes:
volumes:
- name: containerd-socket
hostPath:
path: /run/containerd/containerd.sock
- name: dev-fs
hostPath:
path: /dev
- name: proc-fs
hostPath:
path: /proc
- name: boot-fs
hostPath:
This step enables Falco to interact with the container runtime environment to pull container metadata (like the container name and underlying image name) and to query the host’s process table to discover process names. Also note that this example maps the containerd-socket as well as the docker-socket, because IBM Cloud Kubernetes Service uses containerd as the container runtime.
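If you want the complete list of mounted volumes rather than just the first few shown above, one option is a jsonpath query (a sketch; the exact volume names depend on the chart version):
$ k get ds falco -o jsonpath='{.spec.template.spec.volumes[*].name}'; echo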
2: Examine the Falco configuration files
Falco is configured by several YAML files that you set up via a ConfigMap. falco.yaml configures server settings, and falco_rules.yaml contains rules for what to alert on and at what level.
$ ls rules/
application_rules.yaml falco_rules.local.yaml falco_rules.yaml k8s_audit_rules.yaml
3: View a Falco rule
This rule watches for potentially nefarious Netcat commands and raises alerts at the WARNING priority when it sees them.
$ cat rules/falco_rules.yaml | grep -A 12 'Netcat Remote'
- rule: Netcat Remote Code Execution in Container
  desc: Netcat Program runs inside container that allows remote code execution
  condition: >
    spawned_process and container and
    ((proc.name = "nc" and (proc.args contains "-e" or proc.args contains "-c")) or
     (proc.name = "ncat" and (proc.args contains "--sh-exec" or proc.args contains "--exec"))
    )
  output: >
    Netcat runs inside container that allows remote code execution (user=%user.name
    command=%proc.cmdline container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
  priority: WARNING
  tags: [network, process]
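Rules like this ship in falco_rules.yaml; as the ConfigMap comments noted earlier, your own additions belong in falco_rules.local.yaml. A minimal, hypothetical local rule might look like this (the rule name and output message are made up for illustration; it reuses the spawned_process and container macros from the default rules):
- rule: Shell in default namespace
  desc: A shell was spawned inside a container in the default namespace
  condition: spawned_process and container and proc.name in (bash, sh) and k8s.ns.name = "default"
  output: Shell spawned in watched namespace (user=%user.name command=%proc.cmdline pod=%k8s.pod.name)
  priority: WARNING
  tags: [process]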
Watch Falco in action
Now you can see Falco in action. You tail the logs in one terminal, and then synthetically create some events in the other terminal and watch the events come through the logs.
1: Tail the logs in the first terminal
Run the following commands:
$ k get pod
NAME READY STATUS RESTARTS AGE
falco-5lqs4 1/1 Running 0 23m
falco-lm2x2 1/1 Running 0 23m
falco-pwc5l 1/1 Running 0 23m
$ k logs -f falco-pwc5l
* Setting up /usr/src links from host
* Running falco-driver-loader with: driver=module, compile=yes, download=yes
* Unloading falco module, if present
* Trying to dkms install falco module
Kernel preparation unnecessary for this kernel. Skipping...
Building module:
cleaning build area....
make -j4 KERNELRELEASE=4.15.0-99-generic -C /lib/modules/4.15.0-99-generic/build M=/var/lib/dkms/falco/96bd9bc560f67742738eb7255aeb4d03046b8045/build...........
cleaning build area....
DKMS: build completed.
falco.ko:
Running module version sanity check.
depmod...
DKMS: install completed.
* falco module installed in dkms, trying to insmod
* Success: falco module found and loaded in dkms
Tue May 26 19:43:39 2020: Falco initialized with configuration file /etc/falco/falco.yaml
Tue May 26 19:43:39 2020: Loading rules from file /etc/falco/falco_rules.yaml:
Tue May 26 19:43:40 2020: Loading rules from file /etc/falco/falco_rules.local.yaml:
Tue May 26 19:43:42 2020: Starting internal webserver, listening on port 8765
19:43:41.617274000: Notice Container with sensitive mount started (user=root command=container:1dab04047700 k8s.ns=default k8s.pod=falco-pwc5l container=1dab04047700 image=docker.io/falcosecurity/falco:0.23.0 mounts=/var/run/docker.sock:/host/var/run/docker.sock::true:private,/run/containerd/containerd.sock:/host/run/containerd/containerd.sock::true:private,/dev:/host/dev::false:private,/proc:/host/proc::false:private,/boot:/host/boot::false:private,/lib/modules:/host/lib/modules::true:private,/usr:/host/usr::false:private,/var/data/kubelet/pods/cae458b5-8f6e-4dac-8a44-cfbddbeb8a61/volumes/kubernetes.io~empty-dir/dshm:/dev/shm::true:private,/etc:/host/etc::false:private,/var/data/kubelet/pods/cae458b5-8f6e-4dac-8a44-cfbddbeb8a61/volumes/kubernetes.io~configmap/config-volume:/etc/falco::false:private,/var/data/kubelet/pods/cae458b5-8f6e-4dac-8a44-cfbddbeb8a61/volumes/kubernetes.io~secret/falco-token-v9x4v:/var/run/secrets/kubernetes.io/serviceaccount::false:private,/var/data/kubelet/pods/cae458b5-8f6e-4dac-8a44-cfbddbeb8a61/etc-hosts:/etc/hosts::true:private,/var/data/kubelet/pods/cae458b5-8f6e-4dac-8a44-cfbddbeb8a61/containers/falco/0ae053db:/dev/termination-log::true:private) k8s.ns=default k8s.pod=falco-pwc5l container=1dab04047700
2: Create two security events in a second terminal
$ k get pod
NAME READY STATUS RESTARTS AGE
falco-5lqs4 1/1 Running 0 23m
falco-lm2x2 1/1 Running 0 23m
falco-pwc5l 1/1 Running 0 23m
$ k exec -it falco-pwc5l /bin/bash
root@falco-pwc5l:/# echo "I'm in!"
I'm in!
root@falco-pwc5l:/# cat /etc/shadow > /dev/null
root@falco-pwc5l:/#
In the first terminal you can see the events:
20:07:06.837415779: Notice A shell was spawned in a container with an attached terminal (user=root k8s.ns=default k8s.pod=falco-pwc5l container=1dab04047700 shell=bash parent=runc cmdline=bash terminal=34816 container_id=1dab04047700 image=docker.io/falcosecurity/falco) k8s.ns=default k8s.pod=falco-pwc5l container=1dab04047700
20:07:33.395518344: Warning Sensitive file opened for reading by non-trusted program (user=root program=cat command=cat /etc/shadow file=/etc/shadow parent=bash gparent=<NA> ggparent=<NA> gggparent=<NA> container_id=1dab04047700 image=docker.io/falcosecurity/falco) k8s.ns=default k8s.pod=falco-pwc5l container=1dab04047700 k8s.ns=default k8s.pod=falco-pwc5l container=1dab04047700
You can see interesting details about the security event in the logs. However, there is a more structured way to get the logs out. Let’s explore that now.
Use json output and process events with jq
Modify the Helm chart to make Falco output logs in JSON mode:
$ vim values.yaml
Change jsonOutput: false to jsonOutput: true on line 108.
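After the edit, the relevant part of values.yaml should look roughly like this (a sketch; in this chart version jsonOutput sits under the top-level falco: key, and the exact line number may drift):
falco:
  jsonOutput: true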
Use Helm to pick up the changes
$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
falco default 1 2020-05-26 19:42:59.122742141 +0000 UTC deployed falco-1.1.8 0.23.0
$ helm upgrade falco .
Release "falco" has been upgraded. Happy Helming!
NAME: falco
LAST DEPLOYED: Tue May 26 20:18:31 2020
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Falco agents are spinning up on each node in your cluster. After a few
seconds, they are going to start monitoring your containers looking for
security issues.
No further action should be required.
If you’re fast, you’ll be able to see the pods restarting:
$ k get pod
NAME READY STATUS RESTARTS AGE
falco-bs6rw 1/1 Running 0 6s
falco-lm2x2 0/1 Terminating 0 35m
falco-pwc5l 1/1 Running 0 35m
Afterward, you can repeat the earlier procedure to generate a security event. You’ll see a result like this, in JSON:
{"output":"20:20:00.598526480: Notice A shell was spawned in a container with an attached terminal (user=root k8s.ns=default k8s.pod=falco-5tjrp container=fc8aefdf0c4e shell=bash parent=runc cmdline=bash terminal=34816 container_id=fc8aefdf0c4e image=docker.io/falcosecurity/falco) k8s.ns=default k8s.pod=falco-5tjrp container=fc8aefdf0c4e","priority":"Notice","rule":"Terminal shell in container","time":"2020-05-26T20:20:00.598526480Z", "output_fields": {"container.id":"fc8aefdf0c4e","container.image.repository":"docker.io/falcosecurity/falco","evt.time":1590524400598526480,"k8s.ns.name":"default","k8s.pod.name":"falco-5tjrp","proc.cmdline":"bash","proc.name":"bash","proc.pname":"runc","proc.tty":34816,"user.name":"root"}}
When you process the event with jq, Falco gives useful information about the security event and the full Kubernetes context for the event, such as pod name and namespace:
$ echo '{"output":"20:20:00.598526480: Notice A shell was spawned in a container with an attached terminal (user=root k8s.ns=default k8s.pod=falco-5tjrp container=fc8aefdf0c4e shell=bash parent=runc cmdline=bash terminal=34816 container_id=fc8aefdf0c4e image=docker.io/falcosecurity/falco) k8s.ns=default k8s.pod=falco-5tjrp container=fc8aefdf0c4e","priority":"Notice","rule":"Terminal shell in container","time":"2020-05-26T20:20:00.598526480Z", "output_fields": {"container.id":"fc8aefdf0c4e","container.image.repository":"docker.io/falcosecurity/falco","evt.time":1590524400598526480,"k8s.ns.name":"default","k8s.pod.name":"falco-5tjrp","proc.cmdline":"bash","proc.name":"bash","proc.pname":"runc","proc.tty":34816,"user.name":"root"}}' | jq '.'
{
"output": "20:20:00.598526480: Notice A shell was spawned in a container with an attached terminal (user=root k8s.ns=default k8s.pod=falco-5tjrp container=fc8aefdf0c4e shell=bash parent=runc cmdline=bash terminal=34816 container_id=fc8aefdf0c4e image=docker.io/falcosecurity/falco) k8s.ns=default k8s.pod=falco-5tjrp container=fc8aefdf0c4e",
"priority": "Notice",
"rule": "Terminal shell in container",
"time": "2020-05-26T20:20:00.598526480Z",
"output_fields": {
"container.id": "fc8aefdf0c4e",
"container.image.repository": "docker.io/falcosecurity/falco",
"evt.time": 1590524400598526500,
"k8s.ns.name": "default",
"k8s.pod.name": "falco-5tjrp",
"proc.cmdline": "bash",
"proc.name": "bash",
"proc.pname": "runc",
"proc.tty": 34816,
"user.name": "root"
}
}
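With JSON output enabled, you can also filter the live log stream instead of pasting events by hand. For example, to print just the time, priority, and rule name of each alert (a sketch using one of the pod names above; the grep keeps only the JSON alert lines and skips the startup messages):
$ k logs falco-bs6rw | grep '^{' | jq -r '[.time, .priority, .rule] | @tsv'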
Now you can trigger the Netcat rule you displayed earlier:
root@falco-daemonset-99p8j:/# nc -l 4444
^C
$ kubectl logs falco-daemonset-99p8j
...
{"output":"20:28:20.374390553: Notice Network tool launched in container (user=root command=nc -l 4444 parent_process=bash container_id=fc8aefdf0c4e container_name=falco image=docker.io/falcosecurity/falco:0.23.0) k8s.ns=default k8s.pod=falco-5tjrp container=fc8aefdf0c4e k8s.ns=default k8s.pod=falco-5tjrp container=fc8aefdf0c4e","priority":"Notice","rule":"Launch Suspicious Network Tool in Container","time":"2020-05-26T20:28:20.374390553Z", "output_fields": {"container.id":"fc8aefdf0c4e","container.image.repository":"docker.io/falcosecurity/falco","container.image.tag":"0.23.0","container.name":"falco","evt.time":1590524900374390553,"k8s.ns.name":"default","k8s.pod.name":"falco-5tjrp","proc.cmdline":"nc -l 4444","proc.pname":"bash","user.name":"root"}}
...
$ echo '{"output":"20:28:20.374390553: Notice Network tool launched in container (user=root command=nc -l 4444 parent_process=bash container_id=fc8aefdf0c4e container_name=falco image=docker.io/falcosecurity/falco:0.23.0) k8s.ns=default k8s.pod=falco-5tjrp container=fc8aefdf0c4e k8s.ns=default k8s.pod=falco-5tjrp container=fc8aefdf0c4e","priority":"Notice","rule":"Launch Suspicious Network Tool in Container","time":"2020-05-26T20:28:20.374390553Z", "output_fields": {"container.id":"fc8aefdf0c4e","container.image.repository":"docker.io/falcosecurity/falco","container.image.tag":"0.23.0","container.name":"falco","evt.time":1590524900374390553,"k8s.ns.name":"default","k8s.pod.name":"falco-5tjrp","proc.cmdline":"nc -l 4444","proc.pname":"bash","user.name":"root"}}' | jq '.'
{
"output": "20:28:20.374390553: Notice Network tool launched in container (user=root command=nc -l 4444 parent_process=bash container_id=fc8aefdf0c4e container_name=falco image=docker.io/falcosecurity/falco:0.23.0) k8s.ns=default k8s.pod=falco-5tjrp container=fc8aefdf0c4e k8s.ns=default k8s.pod=falco-5tjrp container=fc8aefdf0c4e",
"priority": "Notice",
"rule": "Launch Suspicious Network Tool in Container",
"time": "2020-05-26T20:28:20.374390553Z",
"output_fields": {
"container.id": "fc8aefdf0c4e",
"container.image.repository": "docker.io/falcosecurity/falco",
"container.image.tag": "0.23.0",
"container.name": "falco",
"evt.time": 1590524900374390500,
"k8s.ns.name": "default",
"k8s.pod.name": "falco-5tjrp",
"proc.cmdline": "nc -l 4444",
"proc.pname": "bash",
"user.name": "root"
}
}
You’ve now seen what kinds of events Falco can discover and a bit of how to configure them.
Set up LogDNA
Tailing logs and parsing them through jq gets old quickly. Let’s push all the logs into a central location by using LogDNA on IBM Cloud.
Set up “IBM Log Analysis with LogDNA” by following the IBM Cloud setup tutorial.
You’ll start shipping your logs to LogDNA with the following commands (or similar):
$ kubectl create secret generic logdna-agent-key --from-literal=logdna-agent-key=bed7b0e4234c4628b14fa9b43e948054
secret/logdna-agent-key created
$ kubectl create -f https://assets.us-south.logging.cloud.ibm.com/clients/logdna-agent-ds.yaml
daemonset.apps/logdna-agent created
$ k get pod | grep logdna
logdna-agent-5rsv2 1/1 Running 0 77s
logdna-agent-7b4sr 1/1 Running 0 77s
logdna-agent-d2rht 1/1 Running 0 77s
Now you can view your logs in the LogDNA web interface.
By repeating the security event procedure above, you can generate some specific Falco logs. Limit your search view to just Falco events by searching for falco at the bottom. By clicking an individual log line, you can see LogDNA pick out and render fields. Note that the free tier of LogDNA doesn’t search past history, but it shows new events that match the search.
You can generate additional events by using the Falco event-generator.
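One way to run it is as a short-lived pod (a sketch based on the falcosecurity/event-generator project; check its README for the current image name and flags):
$ kubectl run event-generator --image=falcosecurity/event-generator --restart=Never -- run syscall --loop
# Watch the alerts appear in the Falco logs (or LogDNA), then clean up:
$ kubectl delete pod event-generator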
Send alerts to a Slack channel
You can highlight specific events by sending them to a Slack channel. Falco supports several alert outputs, but this tutorial shows how to send alerts to Slack. The most flexible way to send alerts is with the falcosidekick program.
Get the sidekick Helm chart (it lives in the same repository as the main Falco chart):
$ git clone https://github.com/falcosecurity/charts/
Cloning into 'charts'...
remote: Enumerating objects: 256, done.
remote: Counting objects: 100% (256/256), done.
remote: Compressing objects: 100% (139/139), done.
remote: Total 1109 (delta 141), reused 172 (delta 82), pack-reused 853
Receiving objects: 100% (1109/1109), 1.10 MiB | 6.84 MiB/s, done.
Resolving deltas: 100% (663/663), done.
$ cd charts/falcosidekick
Create a Slack webhook
See how to create an incoming webhook in Slack at api.slack.com/incoming-webhooks.
Configure the sidekick values.yaml
Add your Slack webhook URL to the webhookurl field on line 26. This is all you have to do.
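In that file, the webhook goes under the slack block of the config section; after the edit it should look roughly like this (a sketch with a placeholder URL; key names can differ slightly between chart versions):
config:
  slack:
    webhookurl: "https://hooks.slack.com/services/XXXX/YYYY/ZZZZ"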
Install sidekick with Helm
$ helm install sidekick .
NAME: sidekick
LAST DEPLOYED: Tue May 26 21:43:55 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=falcosidekick,app.kubernetes.io/instance=sidekick" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80
Modify Falco’s values.yaml to configure Falco to send events to the sidekick:
# (line 228 ish)
httpOutput:
  enabled: true
  url: http://sidekick-falcosidekick:2801/
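Then, as with the JSON output change earlier, pick up the new configuration with a Helm upgrade run from the charts/falco directory:
$ helm upgrade falco .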
Create another security incident, and then you’ll see a security alert posted to Slack.
Summary
You can do a lot more with Falco, including hooking up Falco events to serverless computing platforms such as OpenWhisk and Knative. Hopefully, this introduction gave you some basic information that helps you get started with your next project.
Acknowledgements
I send a big thanks to Michael Ducy (@mfdii) and the Falco Open Source team for helping me get this tutorial working.
Miscellaneous
If you are using OpenShift, set up the OpenShift Security Context Constraints for the account you just created.
$ oc adm policy add-scc-to-user privileged -z falco-account