Log Files and Log Management Best Practices in Linux


Logs are records of events, or sequences of events, on a server or within a network. They typically include dates, times, identities, processes, addresses, and inbound and outbound traffic, depending on what the server does and the type of log being recorded.

Logs capture information about a server or database that helps in troubleshooting operational and availability problems, and analyzing log files is usually a system administrator's first step in tracking down an issue. Not every service generates logs by default, which is why it is important to ensure that logging is enabled during system configuration.

Log reviews surface suspicious system activity, acting as a red flag when something bad is happening, and reviewing logs regularly can help identify malicious attacks on your system. Because systems generate large volumes of log data, reviewing it all manually is almost impossible; this is why it is important to use automated log monitoring software, such as Splunk or IBM QRadar, to keep track of events.

Log files on a Linux server fall into four categories:

  • Application Logs - Application log files record and save information about software application events so that businesses can run smoothly and securely. Using this information, tech teams can assess and analyze threats, investigate the cause of service outages, view warnings about storage space, troubleshoot errors, gather debugging information, and analyze security incidents before they disrupt business operations.

  • Event Logs - An event log keeps records about network usage, operating system activity, applications, devices, and so on. Event logs capture information such as successful and unsuccessful logins and session activity; this information enables IT teams to manage the network and stay alert to security issues.

  • Service Logs - A service log records information about the resources or services within an operating system or network. When services such as the Apache web server, Nginx, or file-sharing services are enabled, started, or disabled, or when errors hamper their performance, these events leave a trail of information in the service log files that allows services to be analyzed, optimized, and troubleshot. On systemd-based systems, $ journalctl displays service logs; run $ journalctl --help or $ man journalctl for more information and the flags available to query the systemd journal.

  • System Logs (Syslog) - A record of operating system events, including startup messages, system changes, unexpected shutdowns, errors and warnings, and other important processes. Linux stores system logs as plain text in the /var/log directory.
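On most distros these logs are plain-text files under /var/log, so a quick directory listing shows what your system actually records. A minimal sketch (file names vary by distro, and many files are readable only by root or the adm group):

```shell
# List the most recently updated log files under /var/log
# (names vary by distro; many require root or adm-group access to read)
ls -t /var/log | head -n 10
```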

The following log types are commonly found among the system logs (the commands below assume /var/log is your working directory):

  1. **Web server logs** - A log containing all activity related to the web servers running on the system over a defined period. $ cd apache2 or $ cd nginx, then use $ ls to view the list and the $ cat command to view the log of your choice.

  2. **Boot logs** - A log of everything that loaded or happened during the boot process of the operating system. $ cat boot.log.

  3. Package manager logs - A log of all the activities of package managers: collections of software tools that automate the process of installing, upgrading, and configuring software within an OS. There are independent logs for dpkg, apt, yum, and dnf, depending on the distro. $ cat dpkg.log, or $ cd apt and $ ls to see the logs within the directory, then use the cat command to view each log file.

  4. **Kernel logs** - This log file records kernel information, including the services, applications, and events running at the kernel level. $ cat kern.log

  5. Authentication logs - This log keeps information about the authentication flow, including authentication type, username, password use, failed and successful authentication attempts, new login events, username or password changes, etc. $ cat auth.log

  6. Daemon logs - Daemons are programs that run in the background without user interaction or triggers; their purpose is to keep the system functional for its operations. The daemon log file keeps the records of all these programs. View it with $ cat daemon.log

  7. Debug logs - This log file keeps a record of issues for troubleshooting and investigation. $ cat debug

  8. User logs - This file records critical actions carried out by users in their accounts or sessions, often helping IT teams understand what users are doing on the system and how much computing resource they consume. $ cat user.log

  9. Xorg logs - Xorg is a popular display server on UNIX-like systems. The Xorg log file records error messages thrown by the display server. $ cat Xorg.0.log

  10. Messages - This log file records general global operating system activity, including startup messages and other system-related information. $ cat messages

  11. Database logs - Files that record logs of database operations. Depending on the database management system installed, $ cd mysql or $ cd postgresql, then use $ ls to list the files and $ cat to view the content of each log file.
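As a small worked example of reading one of these files, the snippet below greps an authentication log for failed SSH logins. The log lines are made-up samples written to /tmp so the commands can run without root access; on a real system you would point grep at auth.log (Debian family) or secure (Red Hat family) instead.

```shell
# Made-up auth-log sample so the commands below need no root access
cat > /tmp/auth.sample <<'EOF'
Mar  4 10:12:01 host sshd[812]: Failed password for invalid user admin from 203.0.113.9 port 4122 ssh2
Mar  4 10:12:07 host sshd[812]: Failed password for root from 203.0.113.9 port 4125 ssh2
Mar  4 10:13:44 host sshd[913]: Accepted password for alice from 198.51.100.7 port 5521 ssh2
EOF

# Count failed attempts
grep -c 'Failed password' /tmp/auth.sample          # prints: 2

# List the unique source addresses behind the failures
# (the IP sits four fields from the end of an sshd "Failed password" line)
grep 'Failed password' /tmp/auth.sample | awk '{print $(NF-3)}' | sort -u   # prints: 203.0.113.9
```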

Other log files in Linux include alternatives.log, btmp, lastlog, syslog, wtmp, etc.

Troubleshooting a Linux system through the log files

$ journalctl outputs a server's logs, including details of successful and failed services and of why and when a service, process, or server failed to start. From this status information you can get a clue as to the cause of an issue and troubleshoot accordingly.
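A few journalctl queries that are commonly useful when narrowing down a failure, as a sketch (systemd-based systems only; ssh.service is just an example unit name, and you may need sudo or adm-group membership to read the journal):

```shell
# Guarded so this is a no-op where systemd is absent (e.g. minimal containers)
if command -v journalctl >/dev/null; then
  # Error-priority messages from the current boot
  journalctl -p err -b --no-pager | tail -n 20
  # Everything logged by one unit, e.g. the SSH daemon
  journalctl -u ssh.service --no-pager | tail -n 10
  # Entries from the last hour only
  journalctl --since "1 hour ago" --no-pager | tail -n 10
fi
```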

Log Management Best Practices

Review the logs to understand why particular entries appear, how often they appear, and what information those events record.

The need for optimal use and management of log events is what led to SIEM tools and other software that automate log analysis and monitoring. Integrate these tools to keep an eye out for log entries that raise security red flags.

Sort log files and merge them according to how they relate and how important they are; for example, you can merge application logs and network logs into the same tracking system. This cuts down on having to manage many separate log files at once.

Set log alerts based on your security protocols and integrate them with issue-tracking or messaging channels such as Slack to notify IT teams of events that violate established protocols.
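A minimal sketch of such an alert, using a made-up threshold and a sample log written to /tmp so it runs without root access; the Slack webhook call appears only as a comment, and SLACK_WEBHOOK_URL is a hypothetical variable you would supply:

```shell
# Sample auth log for illustration; a real job would read /var/log/auth.log
cat > /tmp/auth.alert.sample <<'EOF'
Mar  5 02:11:40 host sshd[101]: Failed password for root from 192.0.2.50 port 40022 ssh2
Mar  5 02:11:43 host sshd[101]: Failed password for root from 192.0.2.50 port 40023 ssh2
Mar  5 02:11:47 host sshd[101]: Failed password for root from 192.0.2.50 port 40031 ssh2
EOF

threshold=2
failures=$(grep -c 'Failed password' /tmp/auth.alert.sample)
if [ "$failures" -gt "$threshold" ]; then
  echo "ALERT: $failures failed logins exceed threshold of $threshold"
  # e.g. post to a messaging channel instead of printing:
  # curl -X POST "$SLACK_WEBHOOK_URL" -d '{"text": "failed-login alert"}'
fi
# prints: ALERT: 3 failed logins exceed threshold of 2
```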

Also, keep remote storage for your log events alongside the local copies; this protects against log tampering by cybercriminals, and the remote records can help in investigations.
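For example, with rsyslog a single forwarding rule sends a copy of every log event to a remote collector. In this sketch, logs.example.com and port 514 are placeholder values for your own log host; the rule would typically live in a file under /etc/rsyslog.d/:

```
# /etc/rsyslog.d/90-forward.conf (placeholder collector address)
# Forward all facilities and priorities over TCP (@@); a single @ would use UDP
*.*  @@logs.example.com:514
```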

Regularly review log activities.

Keep the log information that is unique to your business operations and specific system needs.