DeepSec2019 Talk: IPFS As a Distributed Alternative to Logs Collection – Fabio Nigi
Logging stuff is easy. You take a piece of information created by the infrastructure, systems, or applications and stash it away. The problems start once you want to use the stored log data for analysis, reference, correlation, or any other more sophisticated approach. At DeepSec 2019 Fabio Nigi will share his experience in dealing with log data. We asked him to explain what you can expect from his presentation.
We want access to as many logs as possible. Historically, the approach has been to replicate logs to a central location. The cost of storage is the bottleneck of security information and event management (SIEM) solutions; they are hard to maintain at scale, which leads to reducing the amount of information at our disposal. The state-of-the-art solutions today focus on analyzing the logs on the endpoint. This can optimize maintenance, but it adds the problem of updating the rules and accessing raw data. Both approaches are inefficient and expensive.
What we want from log collection:
- Inference and baselines
- Replication on topics
- On-demand access and drill-down with a hashable/forensic history of status
- Ownership: data needs to map 1:1 to endpoints/people
Granting access to the logs hosted on all endpoints meets at least the requirements above, with zero storage cost and low maintenance.
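The "hashable/forensic history of status" requirement can be illustrated with a minimal hash chain (a hypothetical sketch of the general idea, not the IPFS mechanism itself): each status record is hashed together with the hash of the previous record, so tampering with any past entry invalidates every hash after it.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def chain_entry(prev_hash: str, status: dict) -> dict:
    """Link a status record to the previous one via SHA-256."""
    payload = json.dumps(status, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"status": status, "prev": prev_hash, "hash": digest}

def build_history(statuses: list) -> list:
    """Build a hash-chained history from a sequence of status records."""
    history, prev = [], GENESIS
    for s in statuses:
        entry = chain_entry(prev, s)
        history.append(entry)
        prev = entry["hash"]
    return history

def verify(history: list) -> bool:
    """Recompute every link; any modified entry breaks the chain."""
    prev = GENESIS
    for entry in history:
        if entry["prev"] != prev:
            return False
        if chain_entry(prev, entry["status"])["hash"] != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Changing any stored status record makes `verify` return `False`, which is what gives the history its forensic value.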
This can be achieved by applying the logic of the decentralized web distribution used in the IPFS/IPNS protocols to log collection.
What are you going to take away from the Talk?
- IPFS protocol explanation and features
- How to modify the FOSS IPFS client to make it “log friendly” and transparent to the user
- How to define a private cluster, key management, IPNS (DNS): This will provide encryption in transit and at rest
- How to define an IPFS gateway to collect the information using a classic HTTP API
- How to integrate the solution with the SIEM solution you already have in place: This will make it possible to reuse the playbooks already designed
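Once a gateway is in place, collecting a log file reduces to an ordinary HTTP GET against the gateway's path-style endpoints: `/ipfs/<cid>` for immutable content and `/ipns/<name>` for mutable names. A minimal sketch, assuming a gateway listening on the default local port 8080; the content hash used in the example is hypothetical:

```python
import urllib.request
from urllib.parse import quote

def gateway_url(gateway: str, cid: str) -> str:
    """Path-style gateway URL for an immutable content hash (CID)."""
    return f"{gateway.rstrip('/')}/ipfs/{quote(cid)}"

def ipns_url(gateway: str, name: str) -> str:
    """Path-style gateway URL for a mutable IPNS name."""
    return f"{gateway.rstrip('/')}/ipns/{quote(name)}"

def fetch_log(gateway: str, cid: str, timeout: float = 10.0) -> bytes:
    """Retrieve a log file from the gateway over plain HTTP."""
    with urllib.request.urlopen(gateway_url(gateway, cid), timeout=timeout) as resp:
        return resp.read()

# Example (hypothetical CID):
# data = fetch_log("http://127.0.0.1:8080", "Qm...")
```

Because this is plain HTTP, any SIEM collector that can poll a URL can ingest the data without knowing anything about IPFS internals.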
Properties assured by the protocol include:
- Each log file and all of the blocks within it are given a unique fingerprint called a cryptographic hash.
- IPFS removes duplicates across the network.
- Each network node stores only content it is interested in, and some indexing information that helps figure out who is storing what.
- When looking up files, you’re asking the network to find nodes storing the content behind a unique hash.
- Every file can be found by human-readable names using a decentralized naming system called IPNS.
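The first two properties can be illustrated with a toy content-addressed store (a sketch of the idea only, not the actual IPFS chunking or Merkle-DAG format): files are split into fixed-size blocks, each block is keyed by its SHA-256 hash, and identical blocks, even from different files, are stored only once.

```python
import hashlib

BLOCK_SIZE = 4  # tiny for demonstration; real IPFS chunks default to ~256 KiB

def add_file(store: dict, data: bytes) -> list:
    """Split data into blocks, store each under its hash, return the block hashes."""
    hashes = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        store[h] = block  # an identical block maps to the same key: stored once
        hashes.append(h)
    return hashes

store = {}
file_a = add_file(store, b"AAAABBBB")
file_b = add_file(store, b"AAAACCCC")
# Both files share the b"AAAA" block, so the store holds only 3 unique blocks.
```

Looking up a block hash is then a pure function of content: any node holding that block can serve it, which is what makes decentralized retrieval and network-wide deduplication possible.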
Fabio Nigi is head of security operations at Philip Morris Digital and a former security investigator at the Cisco CSIRT. During and after his engineering degree in Computer Science, Fabio focused on ethical hacking and spent 10 years researching, analyzing, and solving ICT Governance, Risk, Compliance, Information Security, and Privacy issues as a subject matter expert in global enterprise environments.
His LinkedIn profile can be found here: https://www.linkedin.com/in/fabionigi/