Logging Service¶
Fit in NEANIAS Ecosystem¶
The NEANIAS Logging service provides an aggregation facility through which NEANIAS services accumulate logs, generated in a distributed fashion, into a single centralized repository. Well-defined endpoints and library utilities facilitate the transformation, enrichment and indexing of the log entries.
The central repository can be searched and further aggregated to produce meaningful traces of user activity and facilitate troubleshooting and action auditing.
Technology¶
The NEANIAS Logging Service is backed by an ELK stack, which consists of:
- Elasticsearch (https://www.elastic.co/elasticsearch/), the index backend that stores data and makes it searchable
- Logstash (https://www.elastic.co/logstash/), which facilitates the transformation of log entries, the extraction of additional metadata and the homogenization of the logged entries
- Kibana (https://www.elastic.co/kibana), which offers a graphical user interface for visualization, aggregation and filtering over the indexed entries
Log Transfer¶
To facilitate the aggregation of logs, the Beats (https://www.elastic.co/beats/) framework for data shipping is used. Specifically, Beats modules for either file or HTTP transfer can be provided with supportive configuration to lower the integration barrier for all NEANIAS Service Providers.
Logged information¶
Log Templates¶
Templates for log entry formats are provided along with the respective Logstash transformation templates that will extract a limited common set of information. This way, a homogenized set of information is made available for log browsing and aggregation operations.
Currently supported templates include:
- Text log based
- JSON log based
- Simple text log based
- Serilog compact JSON based
Subsequently, more templates will be supported, and an extended set of properties and fields can be added to the templates so as to better capture the available information. This is expected to take place in coordination with service providers.
It is important to note that the timestamp and serviceid fields are mandatory in all log templates and need to be provided.
Examples of the currently supported log formats are:
Text¶
2020-09-01T13:00:00Z,ServiceId,Level,Text,Thread,Status,SubStatus,MsgId,ParentMsgId,Resource,Agent,HostAddr,HostEndPoint,HostMethod,ActionClass,Action,SubAction,UserId,UserDelegate,10.5,RefUrl,"Message with ""embedded"" quotes","Help","ExceptionType","ExceptionMessage","ExceptionStack"
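For illustration only (this is not an official NEANIAS client), a minimal Python sketch that appends one entry in the Text template could look as follows; the csv module takes care of quoting free-text fields, such as the message, that may contain commas or quotes. The file path, helper name and service identifier are hypothetical.
import csv
from datetime import datetime, timezone

# Hypothetical helper: writes one Text-template entry in the field order
# shown above. timestamp and serviceid are the two mandatory fields.
def write_text_entry(path, service_id, level, message):
    row = [
        datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),  # timestamp
        service_id,                                                 # serviceid
        level, "Text", "Thread", "Status", "SubStatus", "MsgId",
        "ParentMsgId", "Resource", "Agent", "HostAddr", "HostEndPoint",
        "HostMethod", "A", "Action", "SubAction", "UserId",
        "UserDelegate", 10.5, "RefUrl", message, "Help",
        "ExceptionType", "ExceptionMessage", "ExceptionStack",
    ]
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(row)  # quotes fields containing , or " automatically

write_text_entry("/usr/share/filebeat/log_data/service01.csv",
                 "my-service", "E", 'Message with "embedded" quotes')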
JSON¶
{
"timestamp": "2020-09-01T13:00:00Z",
"serviceid": "ServiceId",
"level": "Error",
"context": "Json",
"thread": "Thread",
"status": "Status",
"substatus": "SubStatus",
"msgid": "MsgId",
"parentmsgid": "ParentMsgId",
"resource": "Resource",
"agent": "Agent",
"clientaddr": "ClientAddr",
"hostaddr": "HostAddr",
"hostendpoint": "HostEndPoint",
"hostmethod": "HostMethod",
"actionclass": "ActionClass",
"action": "Action",
"subaction": "SubAction",
"userid": "UserId",
"userdelegate": "UserDelegate",
"message": "Message",
"help": "Help",
"duration": 5,
"refurl": "RefUrl",
"exceptiontype": "ExceptionType",
"exceptionmessage": "ExceptionMessage",
"exceptionstack": "ExceptionStack"
}
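Since only the timestamp and serviceid fields are mandatory, a valid JSON-template entry can be much smaller than the full example above. The following Python sketch (field values and file path are hypothetical) appends a minimal entry as one JSON object per line, the form that the Filebeat configurations below crawl:
import json
from datetime import datetime, timezone

# Minimal JSON-template entry: timestamp and serviceid are mandatory,
# every other template field is optional.
entry = {
    "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    "serviceid": "my-service",
    "level": "Error",
    "message": "Something went wrong",
}

# One JSON object per line, appended to a file that Filebeat crawls.
with open("/usr/share/filebeat/log_data/service01.json", "a") as f:
    f.write(json.dumps(entry) + "\n")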
Simpletextlog¶
2020-09-01T13:00:00Z Level Context Message more data
Serilogcompact¶
{
"@t": "2020-09-01T13:00:00Z",
"@l": "Error",
"@m": "Message",
"@x": "ExceptionMessage"
}
Log Model¶
When aggregating the logged information, a series of transformations take place, based on the template each log entry adheres to, in order to extract information that complies with the overall superset of information supported by the Logging Service and to index the log entries appropriately. Any information contained in a log entry that is not directly mapped to one of the log model fields is still maintained and is available for user retrieval.
Date
DateTime (ISO 8601) Mandatory Date of the entry
ServiceId
Text Mandatory Global unique identifier for the service that the log entry refers to. It may be generated by the probe rather than harvested from the logs.
Level
ENUM The level of the entry: (D)Debug info, (I)Information (Default), (S)Supplementary Information, (W)Warning, (E)Error, (F)Fatal Error
Context
Text Global identifier of the interaction context that this message belongs to (normally passed by a caller)
Thread
Text Local identifier of the interaction context that this message belongs to (generated by the service to group together messages of a single thread of execution).
Status
Text Identifier of the result (e.g. error number), preferably conforming to HTTP status codes
SubStatus
Text Fine grained local result identifier.
MsgId
Text Unique identifier of the message preferably in orderable form
ParentMsgId
Text Reference to a message that this message may complement.
Resource
Text The resource handled by the call.
Agent
Text Description of the Agent
ClientAddr
Text Address of the Agent
HostAddr
Text Address of the Host
HostEndPoint
Text The endpoint invoked
HostMethod
Text The method or verb engaged.
ActionClass
Enum A global list of Action classes: (A)Access (default)
Action
Text The action identifier in internal semantics.
SubAction
Text An internal sub-action identifier.
UserId
Text An identifier for the user on whose behalf the work is performed
UserDelegate
Text An identifier for the user who is the delegate to perform the work
Message
Text The text of the message
Help
Text Supplementary text of the message
RefUrl
Text Link to information
Duration
Long The duration of the action in ticks
ExceptionType
Text The exception type
ExceptionMessage
Text The exception message
ExceptionStack
Text The exception stack
Log Exploration¶
A Kibana dashboard is provided that offers browsing, filtering and visualization capabilities. Access to the Kibana graphical application is subject to authorization enforced by relevant access management plugins. Integration with the NEANIAS AAI is being evaluated and planned in order to facilitate some degree of global policy enforcement through the central AAI service.
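For service providers that are granted direct access to the Elasticsearch backend, the indexed entries can also be queried programmatically. The Python sketch below is illustrative only: the endpoint, index pattern and credentials are placeholders, and depending on the index mapping the serviceid filter may need to target a keyword sub-field (e.g. serviceid.keyword).
import requests

ES = "https://elasticsearch.example.org:9200"  # placeholder endpoint

# Last hour of entries for one service, newest first.
resp = requests.post(
    f"{ES}/logs-*/_search",  # placeholder index pattern
    json={
        "query": {"bool": {"filter": [
            {"term": {"serviceid": "my-service"}},
            {"range": {"timestamp": {"gte": "now-1h"}}},
        ]}},
        "sort": [{"timestamp": "desc"}],
        "size": 20,
    },
    auth=("user", "password"),  # placeholder credentials
    timeout=30,
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("timestamp"), src.get("level"), src.get("message"))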
Pushing Log Entries to the Aggregator¶
This section provides examples of:
- Transforming log entries prior to pushing them to the log aggregator
- Connecting to the development log aggregation service
- Connecting to the production log aggregation service
Client Side Transformation¶
It is possible for the integrating service to perform client-side transformation of the log entries shipped through Filebeat. The following filebeat.yml example showcases some of these transformations. For more detailed explanations and additional capabilities, the Filebeat reference should be consulted.
filebeat.inputs:
- type: log
  paths:
    - /usr/share/filebeat/log_data/*
  tags: ["json"]
  enabled: true
  reload.enabled: true
  reload.period: 10s
  include_lines: ['"EventId"\s*:\s*{\s*"Id"\s*:\s*3003']

processors:
- decode_json_fields:
    fields: ["message"]
    process_array: false
    max_depth: 4
    target: "json"
    overwrite_keys: false
    add_error_key: true
- rename:
    fields:
      - from: "json.@t"
        to: "data.timestamp"
      - from: "json.@mt.d.usr.sub"
        to: "data.userid"
- add_fields:
    target: data
    fields:
      level: 'accounting'
      value: 1
      measure: 'unit'
- drop_fields:
    fields: ["message"]
- convert:
    fields:
      - {from: "data", to: "message", type: "string"}
- drop_fields:
    fields: ["json", "data"]

output.console:
  pretty: true
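Because this example ships to output.console, the pipeline can be tested locally: running Filebeat in the foreground prints the transformed events. A sample input line matching the configuration can be generated with the Python sketch below; the nesting of @t and @mt.d.usr.sub is inferred from the rename processors above and is illustrative only.
import json

# An event that passes the include_lines filter (EventId.Id is 3003)
# and carries the fields the rename processors expect.
event = {
    "@t": "2020-09-01T13:00:00Z",
    "@mt": {"d": {"usr": {"sub": "user-123"}}},  # hypothetical user claim
    "EventId": {"Id": 3003},
}

with open("/usr/share/filebeat/log_data/sample.json", "a") as f:
    f.write(json.dumps(event) + "\n")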
Development¶
A Logstash service with a Beats input is available at loggingaggregator.dev.neanias.eu:31314 for pushing log data. Clients can use Filebeat for crawling and pushing log data.
Filebeat configuration example:
- The tags should be set so the log format is recognized: json (JSON format), text (text format), simpletextlog (Simpletextlog format).
- The certificates should be set for authentication.
filebeat.inputs:
- type: log
  paths:
    - /usr/share/filebeat/log_data/*.json
  tags: ["json"]
  enabled: true
  reload.enabled: true
  reload.period: 10s
- type: log
  paths:
    - /usr/share/filebeat/log_data/*.csv
  tags: ["text"]
  enabled: true
  reload.enabled: true
  reload.period: 10s

output.logstash:
  hosts: ["loggingaggregator.dev.neanias.eu:31314"]
  bulk_max_size: 10
  ssl.certificate_authorities: ["/etc/filebeat/ca/root-ca.pem"]
  ssl.certificate: "/etc/filebeat/certificates/service01.pem"
  ssl.key: "/etc/filebeat/certificates/service01-key.pem"
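Before debugging Filebeat itself, it can help to verify that the aggregator is reachable and that the client certificate is accepted. The following Python sketch performs such a check, reusing the certificate paths from the configuration above; the same check applies to the production endpoint by swapping the hostname.
import socket
import ssl

HOST, PORT = "loggingaggregator.dev.neanias.eu", 31314

# Mutual-TLS context mirroring the Filebeat ssl.* settings above.
ctx = ssl.create_default_context(cafile="/etc/filebeat/ca/root-ca.pem")
ctx.load_cert_chain(certfile="/etc/filebeat/certificates/service01.pem",
                    keyfile="/etc/filebeat/certificates/service01-key.pem")

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print("TLS handshake OK:", tls.version())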
Production¶
A Logstash service with a Beats input is available at loggingaggregator.neanias.eu:31314 for pushing log data. Clients can use Filebeat for crawling and pushing log data.
Filebeat configuration example:
- The tags should be set so the log format is recognized: json (JSON format), text (text format), simpletextlog (Simpletextlog format).
- The certificates should be set for authentication.
filebeat.inputs:
- type: log
  paths:
    - /usr/share/filebeat/log_data/*.json
  tags: ["json"]
  enabled: true
  reload.enabled: true
  reload.period: 10s
- type: log
  paths:
    - /usr/share/filebeat/log_data/*.csv
  tags: ["text"]
  enabled: true
  reload.enabled: true
  reload.period: 10s

output.logstash:
  hosts: ["loggingaggregator.neanias.eu:31314"]
  bulk_max_size: 10
  ssl.certificate_authorities: ["/etc/filebeat/ca/root-ca.pem"]
  ssl.certificate: "/etc/filebeat/certificates/service01.pem"
  ssl.key: "/etc/filebeat/certificates/service01-key.pem"