
Admin Guide

JVM Settings

Java 17 is the recommended version for this release of Padas; Java 11 and later versions are also supported. From a security perspective, we recommend the latest released patch version, as older freely available versions may have disclosed security vulnerabilities.

For more information regarding Confluent Platform, please refer to the Confluent Platform documentation.

NOTE: You need to install the correct version of Java separately before you start the installation process.
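
You can verify the active Java version from a shell before proceeding, for example:

java -version

The reported version should be 17 (or at least 11).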

JVM heap options can be set via the PADAS_HEAP_OPTS environment variable. The default value is: -Xmx1G -Xms1G
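
For example, to allocate 4GB of heap when starting Padas from a shell (the 4GB figure is an arbitrary illustration; size the heap to your workload):

export PADAS_HEAP_OPTS="-Xmx4G -Xms4G"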

NOTE: When using systemctl to start the service, you'll need to edit the padas.service unit file to change the JVM heap options, as sketched below.
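
One way of doing this without modifying the packaged unit file in place is a systemd drop-in override (a sketch, assuming the service unit is named padas.service):

sudo systemctl edit padas.service

Then add the following to the generated override file:

[Service]
Environment="PADAS_HEAP_OPTS=-Xmx4G -Xms4G"

Finally, restart the service with sudo systemctl restart padas.service.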


Topic Properties

This section is only applicable if padas.config.store=kafka is set in the padas.properties file. The following Kafka topics must be created to hold the centralized configuration entries. You can create these topics according to your preference (e.g. Padas UI, Confluent Control Center); the steps below simply provide one way of doing so.

NOTE: While it's possible to create these topics either via the REST API or from the Padas UI, it is highly recommended to review Topic Configuration and tune the settings for each Padas topic (especially partitions and replication_factor) according to the expected volume and performance requirements.

NOTE: All required topics must enable log compaction since they keep the relevant configuration entries. A proper retention policy should be implemented in order to avoid any loss of configuration.

All of the following topics require the same Kafka settings: cleanup.policy: compact and retention.bytes: -1.

  • padas_nodes: Up-to-date list of registered Padas Engine instances.
  • padas_tasks: List of transformation and apply tasks.
  • padas_pipelines: List of pipelines that contain task information.
  • padas_topologies: List of topologies that contain pipeline information.
  • padas_rules: List of rules to be utilized by the APPLY_RULES task.
  • padas_lookups: List of lookup files for data enrichment.
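
As an example, the padas_tasks topic could be created from the command line with the Kafka CLI shipped with Confluent Platform; the partition and replication factor values below are placeholders to be tuned as noted above, and the bootstrap server address is an assumption:

kafka-topics --create \
  --bootstrap-server localhost:9092 \
  --topic padas_tasks \
  --partitions 3 \
  --replication-factor 3 \
  --config cleanup.policy=compact \
  --config retention.bytes=-1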

Configuration Properties

For any Padas instance, all configuration is read from the $PADAS_HOME/etc/padas.properties file. Details regarding the property settings can be found in the Configuration File Reference, also available with any installation at $PADAS_HOME/etc/padas.properties.spec.
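
For instance, the Kafka configuration store referenced in the Topic Properties section above is enabled with a single entry in this file (only the property already mentioned in this guide is shown; consult the spec file for the full list):

# $PADAS_HOME/etc/padas.properties
padas.config.store=kafka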


Logging

Padas Engine utilizes Logback for logging application activity. By default, the $PADAS_HOME/etc/logback.xml file is used; log files are created based on the following settings, which can be changed according to your requirements.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- Suppress Logback's own status (INFO) output at startup -->
    <statusListener class="ch.qos.logback.core.status.NopStatusListener" />

    <property name="LOGS" value="${PADAS_HOME}/logs" />

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%yellow(%d{yyyy-MM-dd HH:mm:ss.SSS}) %cyan(${HOSTNAME}) %magenta([%thread]) %highlight(%-5level) %logger{36}.%M - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="DISPLAY" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%highlight(%-5level) %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="FILE-ROLLING" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOGS}/padas.log</file>

        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOGS}/padas.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <!-- each archived file is capped at 100MB -->
            <maxFileSize>100MB</maxFileSize>
            <!-- total size of all archived files; once 20GB is exceeded, the oldest archives are deleted -->
            <totalSizeCap>20GB</totalSizeCap>
            <!-- keep archives for 60 days -->
            <maxHistory>60</maxHistory>
        </rollingPolicy>

        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} ${HOSTNAME} [%thread] %-5level %logger{36}.%M - %msg%n</pattern>
        </encoder>
    </appender>
    <logger name="ch.qos.logback" level="WARN" />
    <logger name="org.springframework" level="WARN" />
    <logger name="org.apache" level="WARN" />
    <logger name="io.confluent" level="WARN" />
    <logger name="io.padas" level="INFO">
        <!--<appender-ref ref="STDOUT" />-->
    </logger>
    <logger name="io.padas.app.management.Manager" level="INFO">
        <appender-ref ref="DISPLAY" />
    </logger>
    <logger name="io.padas.app.App" level="INFO">
        <appender-ref ref="DISPLAY" />
    </logger>
    <root level="info">
        <appender-ref ref="FILE-ROLLING" />
    </root>
</configuration>
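
For example, to raise Padas application logging to DEBUG and also echo it to the console, the io.padas logger above can be changed as follows (DEBUG output can be verbose; revert to INFO for normal operation):

    <logger name="io.padas" level="DEBUG">
        <appender-ref ref="STDOUT" />
    </logger>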

Integrate to External Systems

It is possible to integrate any external system either as a Kafka Producer (a source, generating and ingesting event data) or a Kafka Consumer (a sink, consuming the padas_alerts topic for further analysis/alerting). Confluent Hub can be utilized to find a suitable source and/or sink connector for the integration.

Winlogbeat (Elastic Stack)

Winlogbeat (OSS) can be utilized as a Kafka Producer to ingest Windows event data. You can find relevant example information below.

Winlogbeat examples:

  • Sample Sysmon Config with Winlogbeat: This example Sysmon configuration is based on the SwiftOnSecurity sysmon config and focuses on high-quality default event tracing while excluding any Winlogbeat-generated activity from the event logs.

  • Winlogbeat configuration (winlogbeat.yml): This is an example Winlogbeat configuration that reads both the Security and Sysmon event logs on the installed Windows system and sends events to the relevant Kafka topics (i.e. winlogbeat-sysmon and winlogbeat-security); a sketch of the Kafka output section follows this list.
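
The following is a minimal sketch of the Kafka output section of such a winlogbeat.yml; the broker address is an assumption, and the routing condition uses the winlog.channel field to separate Sysmon events from Security events:

output.kafka:
  # assumed broker address; replace with your Kafka bootstrap servers
  hosts: ["kafka-broker:9092"]
  # default topic for events matching no rule below (e.g. the Security log)
  topic: "winlogbeat-security"
  topics:
    # route Sysmon channel events to their own topic
    - topic: "winlogbeat-sysmon"
      when.contains:
        winlog.channel: "Microsoft-Windows-Sysmon"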

PADAS configurations:

  • Winlogbeat Sysmon Transformation: This is a set of configuration items (Tasks) that converts Winlogbeat Sysmon formatted data to the Padas Datamodel so that any pertinent rule can be applied, for example via the Apply Rules Configuration task (before using that task, add the rule set below from the Rules view).

  • Out-of-the-box PADAS Rules: This sample JSON configuration contains MITRE ATT&CK relevant rules, which have been tested and verified with the above example configurations. You can upload this file via the Rules view to quickly get started. For any other input, it's recommended to perform transformations so that the data matches the applicable data model, allowing PDL queries to run against standardized fields.

Splunk

Splunk can act as a Kafka Consumer for further analysis of Padas Alerts (populated via the APPLY_RULES task function) or any other topic. Padas and Splunk integration can be accomplished seamlessly with the Splunk Sink Connector, and alerts can utilize the Technology Add-on for Padas (TA-Padas). The Splunk Sink Connector needs to be installed on Confluent Kafka, and TA-Padas needs to be installed on the Splunk Search Head(s). Please follow the instructions within the links on how to properly install each.
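
As one way of installing the connector on the Confluent side, the Confluent Hub client can be used (the coordinates below refer to the Splunk Connect for Kafka package on Confluent Hub; pin a specific version if preferred):

confluent-hub install splunk/kafka-connect-splunk:latest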

An example configuration for the Splunk Sink Connector (splunk-sink-connector-example.json) is shown below:

{
  "name": "SplunkSinkConnectorConnector_Padas",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "topics": "padas_alerts",
    "splunk.hec.token": "e8de5f0e-97b1-4485-b416-9391cbf89392",
    "splunk.hec.uri": "https://splunk-server:8088",
    "splunk.indexes": "padas",
    "splunk.sourcetypes": "padas:alert",
    "splunk.hec.ssl.validate.certs": "false"
  }
}
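
Once saved, the connector can be created through the Kafka Connect REST API; the Connect worker host and port below are assumptions:

curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d @splunk-sink-connector-example.json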

If the Splunk installation has the MITRE ATT&CK App for Splunk, then any alerts with MITRE ATT&CK annotations are automatically integrated as well. Please refer to the app documentation for details.