The Gravwell HTTP ingester now supports a default configuration block that is compatible with Splunk HEC ingester defaults. To show this in action, we will use SCYTHE, an excellent attack simulation tool, along with our old pal Sysmon, and tease some upcoming purple team content.
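As a sketch of what such a configuration might look like, here is a hypothetical HEC-compatible listener block. The block and field names below are illustrative assumptions, not the shipped defaults; consult the ingester's actual config file for the real options.

```
# Hypothetical HEC-compatible listener block (names are assumptions);
# check the shipped HTTP ingester config for the actual syntax.
[HEC-Compatible-Listener "splunk-hec"]
	URL="/services/collector/event"
	TokenValue="replace-with-your-token"
	Tag-Name=hec
```

With a block like this in place, tooling that already speaks Splunk HEC could be pointed at the Gravwell HTTP ingester without changing the client-side payload format.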
(This post is part one of a two-part series on building and using an IPMI ingester and kit. Part two is coming soon.)
In many data aggregation and analysis tools, the ecosystem is fully closed source, and often even the data ingest protocols are proprietary. This means that if you want to ingest a novel data format of your own, you’re either a) $%*! out of luck, or b) forced to collapse your data into some low-performance, textual, line-delimited format that a generic log ingester can handle.
At Gravwell HQ, we take a different approach. All of our ingesters are open source and freely available under a BSD license, and our ingest framework is open and available as a Go library. In this post, we’ll be taking a tour of how we wrote a real and officially supported Gravwell ingester: the new Gravwell IPMI Ingester. We’ll cover how we manage configuration files, set up and manage indexer connections, and transform IPMI data into a flexible JSON schema before sending it out.
In today's blog, we’ll give a short overview of the transaction module introduced in our most recent update: Gravwell 4.1.5. The transaction module is a powerful module that can rewrite individual entries into grouped entries based on any number of keys--essentially, it lets you collate entries by whatever criteria you choose.
Gravwell launched our free Community Edition in July 2018, and it has become an invaluable resource for home lab users and anyone looking to monitor their personal network or wrangle large amounts of data (up to 2GB/day) into actionable insights. In this blog post, Dustin Finn, one of our first CE users and recipient of the inaugural “CE User of the Year” Award, shares some of the cool projects he’s working on using Gravwell Community Edition.
SC Magazine published an article headlined "SIEM rules ignore bulk of MITRE ATT&CK framework, placing risk burden on users." In the article, Bradley Barth writes about a study showing only 16 percent of the MITRE framework was covered by SIEM rules.
I take issue with the core premise of this article. MITRE ATT&CK is a framework for high-level planning and strategic thinking, not a series of checkboxes on which to overlay a vendor product. We need to avoid turning cybersecurity into checkboxes. What do I mean? Read on to hear my thoughts on the SC Magazine article, and to see how we work with customers to improve observability without forcing them to fit a pre-defined mold.
We are pleased to announce the immediate availability of the Gravwell Sysmon kit. This kit is designed to get you started quickly with Sysmon data and demonstrate the art of the possible. This post will cover the basic contents of the kit and then we will perform a quick investigation of a process that probably shouldn't be running on a corporate machine.
One of Gravwell's great strengths is binary ingest: you can store things like raw packets, then parse them later when you know what you want to extract. This came in handy recently when I set up IPv6 on my home network and wanted to keep an eye on who's issuing Router Advertisement (RA) messages. An RA message by itself isn't very helpful, since you just get a MAC address and an IPv6 link-local address, but with a little bit of Gravwell query magic, I was able to parse out ARP packets to link the IPv6 address to an IPv4 address, which helps identify the machine.
Version 3.7.0 of the Gravwell open source repository introduces an exciting new feature: a Go library for interacting directly with Gravwell! Our Data Fusion platform has always been about meeting custom analytics needs and not forcing clients onto limited rails for dashboarding, searching, etc. Out-of-the-box only gets you so far, and beyond is where our customers get into doing some really, really cool stuff.
Open sourcing the Gravwell client library makes it much faster for users to get any custom code up and running - which means less time to ingestion, automation, alerting, and other juicy data goodness. This post will show how to instantiate & authenticate a client, then give a few examples of what you can do.
Gravwell 4.1 introduces a new module - Enrich - that can add static data to every entry in a query. Sometimes you need to attach the same value to every entry in a dataset: the standard deviation across the whole dataset, an annotation about the query, or several data points fused in from a resource. The enrich module provides this simple but important capability.
Sometimes, you just need to get data into Gravwell without setting up any ingesters--maybe you want to analyze a collection of log files somebody emailed you, or maybe you've got a pcap file from Wireshark. We've had command-line tools for this for years, but with Gravwell 4.1.0 we're pleased to announce a new feature: a flexible and easy-to-use interface for ingesting data inside the web interface! This UI lets you drag-and-drop line-delimited logs, packet capture files, or entries downloaded from a Gravwell query; Gravwell will figure out what you gave it and parse it appropriately.