Gravwell's ingesters can pull data from a wide variety of sources and we advocate keeping raw data formats for root cause analysis, but sometimes it's nice to massage the data a little before sending it to the indexers. Maybe you're getting JSON data sent over syslog and would like to strip out the syslog headers. Maybe you're getting gzip-compressed data from an Apache Kafka stream. Maybe you'd like to be able to route entries to different tags based on the contents of the entries. Gravwell's ingest preprocessors make this possible by inserting one or more processing steps before an entry is sent upstream to the indexer.
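To give a flavor of how this looks in practice, here's a rough sketch of a Simple Relay listener that runs incoming entries through a gzip preprocessor before forwarding them; the block and option names below are illustrative, so check the preprocessor documentation for the exact settings your ingester supports:

```
[Listener "compressed-json"]
	Bind-String = "0.0.0.0:7777"
	Tag-Name = json
	Preprocessor = "unzip"       # entries pass through this preprocessor first

[preprocessor "unzip"]
	Type = gzip                  # decompress gzip-encoded entry bodies
	Passthrough-Non-Gzip = true  # hand uncompressed entries upstream untouched
```

Preprocessors can also be chained by listing more than one `Preprocessor` line, with each step's output feeding the next.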
Gravwell has officially supported Netflow v5 and IPFIX for some time. As of Gravwell 3.3.3, we're happy to announce that we now support Netflow v9 as well! This post will talk about the essential differences between Netflow v9 and IPFIX, how we implemented support, and how to get up and running with Netflow v9 ingest. We'll also talk about some pretty serious efficiency improvements we made in our IPFIX/Netflow v9 parsing module.
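As a preview of the configuration side, a Netflow v9 collector looks roughly like the sketch below; the `Flow-Type` value and option names are our best-guess illustration here, so refer to the Netflow ingester documentation for the authoritative settings:

```
[Collector "netflow-v9"]
	Bind-String = "0.0.0.0:2055"  # UDP port your routers export flows to
	Tag-Name = netflowv9          # tag applied to ingested flow records
	Flow-Type = netflowv9         # netflowv5 and ipfix are also supported
```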
If your enterprise is using Office 365, your users are generating log entries every time they log in, upload files to OneDrive, send an email--the logging is pretty extensive! You can analyze these log events in the O365 console, but wouldn't it be nice to pull them into Gravwell and correlate with other data sources? Thanks to the new Office 365 ingester, you can.
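Setup details are covered in the full post, but the ingester configuration follows the familiar Gravwell pattern: register an application in Azure AD, then feed its credentials to the ingester. The sketch below is illustrative only; field names such as `Client-ID` and `Directory-ID` are stand-ins, so consult the ingester's README for the real ones:

```
Client-ID = "79fb8a23-..."            # Azure AD application (client) ID
Client-Secret = "<your secret here>"  # application secret
Directory-ID = "c899e649-..."         # Azure AD directory (tenant) ID
Tenant-Domain = "example.onmicrosoft.com"

[ContentType "azureAD"]
	Content-Type = "Audit.AzureActiveDirectory"
	Tag-Name = 365-azure              # tag for Azure AD audit events
```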
One of the exciting new features in Gravwell 3.3.0 is search macros. Anyone who's experimented much with Gravwell knows you may often end up crafting a long and complex regular expression which you'll want to use over and over, but such a long regex makes the query hard to work with. Macros let you turn that long regular expression (or any other part of a search query) into a much shorter name you can use again and again.
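For instance, suppose you've saved a macro named APACHE_REQ whose body is a regex module pulling the request URL out of Apache access logs (the macro name and field names here are hypothetical). Instead of pasting the full regular expression into every query, you invoke the macro with a `$` prefix and it expands in place:

```
tag=apache $APACHE_REQ | count by url | table url count
```

When the query runs, `$APACHE_REQ` is replaced with the macro's body, so the pipeline behaves exactly as if you had typed the long regex yourself.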
We are excited to announce the immediate availability of Gravwell version 3.3.0. Though technically a minor release, it packs a few fairly significant features and a whole heap of bug fixes and performance improvements. Over the next couple of days we will be publishing a series of blog posts detailing each of the new things in Gravwell, but first we need to show off the centerpiece of this release: Overwatch.
When we started Gravwell years ago, we knew it was going to be a significant undertaking requiring some serious tooling under the hood. Building a custom data lake and analytics platform from scratch that can scale to hundreds of TB/day ain't easy. We chose Go for a lot of reasons and that choice has paid dividends in terms of what we've been able to accomplish in so short a time.
This post is about our tooling and some of the lessons we have learned along the way in managing a large Go codebase. A few weeks ago Gravwell made the switch to Go modules on both our open source GitHub repositories and our internal repo. Let's talk about our planned workflow going forward and a few caveats we've run into.
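As a taste of what that workflow looks like, the hypothetical go.mod below shows the pattern we'll discuss: pinning released versions of our public modules while using a replace directive to point at a local checkout during cross-repo development (module paths and versions here are illustrative):

```
module github.com/gravwell/example-ingester

go 1.12

require github.com/gravwell/ingest/v3 v3.3.0

// During development, swap the published module for a local checkout
// so changes in both repos can be tested together before tagging.
replace github.com/gravwell/ingest/v3 => ../ingest
```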
With Gravwell 3.2.4 we've introduced a new search module: kv, short for 'key-value'. This module is designed to help you extract key-value data from text entries without having to hand-craft regular expressions. It also interfaces with the fulltext indexer automatically, so you can analyze your indexed data more quickly.
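As a quick illustration, assume entries that carry pairs in the default key=value form, e.g. `user=bob src=10.0.0.1 action=login` (a made-up log format). Pulling out fields becomes a single module invocation, no regex required:

```
tag=auth kv user src action | table user src action
```

The kv module extracts the named keys into enumerated values, which downstream modules like table, count, or stats can then use directly.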
We are proud to announce the immediate availability of Gravwell version 3.2.3. This release is all about performance and bug fixes, but we did manage to slip in a new Kafka ingester.
We are pleased to announce the immediate availability of Gravwell version 3.2.2!
This one got away from us a bit and probably should have been a major release; there is just too much good stuff in here. I tried to convince the team that we should just jump to version 10, but as our GUI lead started choking and muttering something about "C'est absurde" ("that's absurd"), we decided to stick with a point release.
The personal story I'm about to tell highlights one of the most important differentiators between Gravwell and Splunk: a non-abusive pricing model. Data rates aren't always predictable…