Audit User Activity in the System

For a company to remain agile, engineers need access to multiple servers across various environments. This makes people more independent and reduces dependencies on other teams. While this is helpful, it is important to have checks and controls in place that prevent that access from being abused. At Haptik, we wanted to monitor all user activity on all servers, and we also log all important activity happening on the platform. For that purpose, we set up a pipeline that collects logs and pushes them to a common dashboard for auditing. In this blog, I will walk you through a simple version of that pipeline.

Types of Activity Logs

  1. User activity logs on servers (SSH and initiated commands, files edited, etc.)
  2. User activity on our Bot Builder Platform (Who edited what)
  3. User activity inside the Python shell (IPython logs)

Technologies Used

Ansible [v2.7]

Ansible is an open-source software provisioning, configuration management, and application deployment tool.
It just requires that systems have Python (on Linux servers) and SSH.

Filebeat

Filebeat is a lightweight shipper for forwarding and centralizing log data.
Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to Elasticsearch or to Logstash for indexing.

R-ELK Stack [v6.x]

R-ELK is an acronym for four open-source projects: Redis, Elasticsearch, Logstash, and Kibana.
“Redis” is used as a buffer in the ELK stack.
“Elasticsearch” is a search and analytics engine.
“Logstash” is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a “stash” like Elasticsearch.
“Kibana” lets users visualize the data in Elasticsearch with charts and graphs.
ELK stack setup steps are present here.

Pipeline


As shown in the above diagram, all the user activity data is collected and pushed to our central logging ELK server. Creating users on servers is controlled through Ansible, with a Jenkins job that drives it. We also use AWS OpsWorks to create users on AWS machines, but we wanted a solution that is more cloud-agnostic. Our platform has a separate permissions model; for activity on the platform, we push the important types of user activity logs, which helps us audit who did what (who made changes to which bot, for instance; some of this is still a work in progress).

Steps to Set Up

1. User Creation on Servers

We use Ansible to manage users on all our VMs (servers). All the users that we create are added to a developers group.
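The actual playbook isn't shown in this post; a minimal sketch of what such a play could look like is below (the group name, variables, and key lookup are illustrative assumptions, not our exact setup):

```yaml
# create_user.yml (illustrative)
- hosts: "{{ target_hosts }}"
  become: yes
  tasks:
    - name: Ensure the developers group exists
      group:
        name: developers
        state: present

    - name: Create the user and add them to the developers group
      user:
        name: "{{ new_user }}"
        groups: developers
        shell: /bin/bash
        state: present

    - name: Install the user's SSH public key
      authorized_key:
        user: "{{ new_user }}"
        key: "{{ lookup('file', ssh_public_key_path) }}"
```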

This user-creation playbook is available as a Jenkins job. We use it to create any user and give them SSH access to the specified servers. The inventory file and the permissions file are maintained by us on a separate, reliable data store, so we can control what type of access each user has on the servers.

2. Bash History Setup

Following are the steps to consolidate Bash history for all the users into a single file:

Edit the system-wide Bash runtime config file and append a command like the one below to the end of it:
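Our exact snippet isn't reproduced here; a common approach, sketched below, uses PROMPT_COMMAND to send every interactive command to syslog via logger (the facility and tag are assumptions that the rsyslog step below relies on):

```bash
# Append to the system-wide Bash config (e.g. /etc/bash.bashrc on Ubuntu).
# Each interactive command is logged to syslog with the user, working
# directory, exit code, and the command itself.
export PROMPT_COMMAND='RETRN_VAL=$?; logger -p local6.debug -t bash "user=$(whoami) pwd=$PWD exit=$RETRN_VAL cmd=$(history 1 | sed "s/^[ ]*[0-9]\+[ ]*//")"'
```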

Set up rsyslog-based logging with a new config file:
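The file name and destination path below are illustrative; the idea is to route the local6 facility used by the snippet above into its own file:

```
# e.g. /etc/rsyslog.d/bash.conf (illustrative)
local6.*    /var/log/commands.log
```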

Restart rsyslog:
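For example:

```bash
sudo service rsyslog restart   # or: sudo systemctl restart rsyslog
```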

Configure log rotation for the new log file:
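A sketch of a logrotate entry, assuming the command-log path used above; the rotation frequency and retention are placeholders to tune:

```
# e.g. /etc/logrotate.d/commands (illustrative)
/var/log/commands.log {
    weekly
    rotate 12
    compress
    missingok
    notifempty
}
```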

3. auditd Setup

Set up auditd on all the servers:
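For example, on Ubuntu/Debian hosts (package names differ slightly on other distributions):

```bash
sudo apt-get update
sudo apt-get install -y auditd audispd-plugins
sudo systemctl enable auditd
```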

Configure the auditd rules:

Contents: Sample conf here.
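The linked sample isn't reproduced here; purely as an illustration, rules like the following track command execution and changes to sensitive files (the watch paths and keys are assumptions):

```
# e.g. /etc/audit/rules.d/audit.rules (illustrative)
# Log every program execution
-a always,exit -F arch=b64 -S execve -k commands
-a always,exit -F arch=b32 -S execve -k commands
# Watch changes to user/group databases and the SSH daemon config
-w /etc/passwd -p wa -k identity
-w /etc/group -p wa -k identity
-w /etc/ssh/sshd_config -p wa -k sshd
```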

Restart auditd:
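For example:

```bash
sudo service auditd restart
```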

You can read more about auditing Linux servers here.

4. Filebeat Setup

Configure Filebeat:
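Our full filebeat.yml isn't reproduced here; below is a minimal sketch for a recent Filebeat 6.x release (6.3+; older 6.x releases use filebeat.prospectors instead of filebeat.inputs). The log paths, the type field values, and the Redis host/key are assumptions based on the pipeline described above:

```yaml
# /etc/filebeat/filebeat.yml (illustrative)
filebeat.inputs:
  - type: log
    paths: ["/var/log/commands.log"]      # consolidated Bash history
    fields: {type: commands}
    fields_under_root: true
  - type: log
    paths: ["/var/log/audit/audit.log"]   # auditd logs
    fields: {type: auditd}
    fields_under_root: true
  - type: log
    paths: ["/var/log/auth.log"]          # SSH auth logs
    fields: {type: auth}
    fields_under_root: true

# Ship events to the Redis buffer that sits in front of Logstash
output.redis:
  hosts: ["elk.internal.example:6379"]
  key: "audit"
```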

  • Python shell logs are set up separately via code. We use IPython settings to log shell sessions to a file, which Filebeat then pushes to the ELK server (a sketch of this setting follows after this list).
  • Audit logs for the application/bot platform are also written to a file on the host machines, which we push to the same ELK stack through a separate Filebeat instance that runs on each application server.
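For the IPython part, one way to do this is to enable session logging in the profile configuration; the profile path and log file below are assumptions, not our exact setup:

```python
# ~/.ipython/profile_default/ipython_config.py (illustrative path)
c = get_config()

# Append every command entered in an IPython session to a log file
# that Filebeat is configured to watch.
c.InteractiveShell.logappend = '/var/log/ipython/shell.log'
```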

Restart Filebeat:
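For example:

```bash
sudo service filebeat restart   # or: sudo systemctl restart filebeat
```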

5. Logstash Setup

We are going to leverage the Logstash instance running on the ELK server. Following are the steps to configure Logstash for audit logs:

Create a new configuration file with the contents below:
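A minimal sketch, assuming the Redis key from the Filebeat sketch above and the audit-* index naming used later in this post (hosts and the file path are placeholders):

```
# e.g. /etc/logstash/conf.d/audit.conf (illustrative)
input {
  redis {
    host      => "localhost"
    key       => "audit"
    data_type => "list"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "audit-%{+YYYY.MM.dd}"
  }
}
```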

Depending on your requirements, you can use different indices for different data. For a better understanding of inputs and outputs, visit Elastic's website.

6. Viewing it on Kibana Dashboard

Below are the steps to view logs:

  • Go to the ELK server's Kibana URL.
  • All the logs are pushed to the audit-* index on Elasticsearch
  • All auditd logs are sent with type: auditd
  • All Bash command history logs are sent with type: commands
  • All Python shell logs are sent with type: shell
  • All SSH auth logs are sent with type: auth

Hope this helps you set up a similar pipeline as well. This is a very high-level overview of a data pipeline for collecting user activity logs. We have put more audits in place internally, which we will share soon. You can also push these logs to archival long-term storage like AWS S3 or Azure Blob Storage and fetch the data as and when required.
We are hiring. Please visit our careers page and let us know how you go about maintaining security practices at your company. We will soon be back with a more detailed blog about the other practices we follow.
