The Gravwell HTTP ingester now supports a default config block that's compatible with Splunk HEC ingester defaults. To show this in action, we'll use Scythe, an awesome attacker simulation tool, along with our old pal Sysmon, and tease some upcoming purple team content.
(This post is part one of a two-part technology series about building and using an IPMI ingester and kit. Part two coming soon.)
In many data aggregation and analysis tools, the ecosystem is fully closed source, and often even the data ingest protocols are proprietary. This means that if you want to ingest a novel data format of your own, you're either (a) $%*! out of luck, or (b) forced to collapse your data into some form of low-performance, textual, line-delimited data that a generic log ingester will work with.
At Gravwell HQ, we take a different approach. All of our ingesters are open source and freely available under a BSD license, and our ingest framework is open and available as a Go library. In this post, we’ll be taking a tour of how we wrote a real and officially supported Gravwell ingester: the new Gravwell IPMI Ingester. We’ll cover how we manage configuration files, set up and manage indexer connections, and transform IPMI data into a flexible JSON schema before sending it out.
Sometimes, you just need to get data into Gravwell without setting up any ingesters--maybe you want to analyze a collection of log files somebody emailed you, or maybe you've got a pcap file from Wireshark. We've had command-line tools for this for years, but with Gravwell 4.1.0 we're pleased to announce a new feature: a flexible and easy-to-use interface for ingesting data inside the web interface! This UI lets you drag-and-drop line-delimited logs, packet capture files, or entries downloaded from a Gravwell query; Gravwell will figure out what you gave it and parse it appropriately.
Maybe you've just signed up for Gravwell Community Edition and are not quite sure where to start. There are a lot of features in Gravwell, and a lot of different ingesters for pulling in data. Gravwell 4.0 includes a kit that can collect data without any external ingester--and it helps you analyze everyone's favorite topic, the weather!
In our continuing series of HOWTOs, today we are getting some data into the Gravwell instance we set up in Getting Gravwell Installed in 2 Minutes.
As with install, setting up your data ingesters is quick and easy.
Gravwell's ingesters can pull data from a wide variety of sources and we advocate keeping raw data formats for root cause analysis, but sometimes it's nice to massage the data a little before sending it to the indexers. Maybe you're getting JSON data sent over syslog and would like to strip out the syslog headers. Maybe you're getting gzip-compressed data from an Apache Kafka stream. Maybe you'd like to be able to route entries to different tags based on the contents of the entries. Gravwell's ingest preprocessors make this possible by inserting one or more processing steps before an entry is sent upstream to the indexer.
We're proud to announce the immediate availability of Gravwell version 3.2.3. This release is all about performance and bug fixes, but we did manage to slip in a new Kafka ingester.
We've had benchmarking requests from multiple organizations struggling with ingest performance in Elasticsearch, so we're publishing our results here. The latest Gravwell release marks a significant improvement in ingest and indexing performance, and this post covers the nitty-gritty details. Better ingest performance means reduced infrastructure cost, less dropped data, and faster time-to-value. See how Gravwell stacks up.
We're continuing to work with investigative reporters to research unscrupulous activity on social media. Most recently, Engadget published a piece on nefarious political influencers on Reddit. We’ve written in the past about analyzing social media comments, but didn’t make the ingesters publicly available. With an increasing need for research in this area, we decided that releasing our Reddit and Hacker News ingesters could help new users get started with Gravwell even faster, so we open-sourced them. Read on to learn how to get the ingesters, how to run them, and how to get started with the data.
Gravwell recently introduced a new ingester which accepts entries via HTTP POST requests. Now it's easy to send arbitrary data to Gravwell via scripts using only the curl command. In this blog post, we'll use the HTTP ingester to build a weather-monitoring dashboard!