Gravwell is designed to work with your data, in your infrastructure, and within your constraints. Whether you have petabytes of packet capture, data-at-rest sensitivity requirements, or are simply integrating existing infrastructure, Gravwell is built to enable a workflow that meets your needs. Today we'll look at an example integration with multiple Google Stenographer installations, our new Gravwell Packet Fleet ingester, and a powerful new feature in Gravwell Big Bang: Actionables.
Today we released Gravwell 3.3.11, hot on the heels of last week's 3.3.10. In our previous post, we said that 3.3.9 would be the final planned release before our big 3.4.0 version, but there were a few important fixes we wanted to get out ASAP! These two releases are almost entirely bug fixes, apart from two little features we snuck in; we'll cover the bug fixes first and save the fun stuff for the end!
Some time back, I built a small hydroponic garden in my garage to grow fresh veggies year round. Although I avoided a few hazards of traditional gardening, moving my garden inside proved to have its own set of challenges. I eventually realized that I could better manage my plants if I had a means to continually monitor their condition. Using an Arduino, a few sensors, and a tiny web server, I started collecting and accumulating data about my garden. It didn't take long before the amount of accumulated sensor data became cumbersome to look through. However, after importing the data into Gravwell, I was able to quickly visualize historical sensor information and gain new insights to make my thumb a little greener.
This week sees the release of Gravwell 3.3.9, our last planned release prior to the 3.4.0 "Big Bang" release. The Big Bang release will introduce Gravwell kits (our way of providing pre-packaged dashboards, resources, SOAR scripts, and more) plus lots of new user interface features. But first, let's talk about 3.3.9. This relatively boring release mostly comprises bug fixes, a new timegrinder timestamp format, and one UI tweak. Full change log available here.
Gravwell's ingesters can pull data from a wide variety of sources and we advocate keeping raw data formats for root cause analysis, but sometimes it's nice to massage the data a little before sending it to the indexers. Maybe you're getting JSON data sent over syslog and would like to strip out the syslog headers. Maybe you're getting gzip-compressed data from an Apache Kafka stream. Maybe you'd like to be able to route entries to different tags based on the contents of the entries. Gravwell's ingest preprocessors make this possible by inserting one or more processing steps before an entry is sent upstream to the indexer.
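As a rough sketch of how a preprocessor is wired in, here is what stripping syslog headers from JSON payloads might look like in a Simple Relay ingester config. The listener name, tag, and regex below are hypothetical illustrations; check the ingester documentation for the exact options supported by each preprocessor type.

```
[Listener "syslogjson"]
	Bind-String="0.0.0.0:601"
	Tag-Name=json
	# Run entries through the preprocessor below before shipping upstream
	Preprocessor=striphdr

[Preprocessor "striphdr"]
	# Hypothetical regex: capture only the JSON body after the syslog header
	Type=regexextract
	Regex="^\\S+ \\S+ \\S+ (?P<data>\\{.+\\})$"
	Template="${data}"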
Gravwell has officially supported Netflow v5 and IPFIX for some time. As of Gravwell 3.3.3, we're happy to announce that we now support Netflow v9 as well! This post will talk about the essential differences between Netflow v9 and IPFIX, how we implemented support, and how to get up and running with Netflow v9 ingest. We'll also talk about some pretty serious efficiency improvements we made in our IPFIX/Netflow v9 parsing module.
If your enterprise is using Office 365, your users are generating log entries every time they log in, upload files to OneDrive, or send an email; the logging is pretty extensive! You can analyze these log events in the O365 console, but wouldn't it be nice to pull them into Gravwell and correlate with other data sources? Thanks to the new Office 365 ingester, you can.
One of the exciting new features in Gravwell 3.3.0 is search macros. Anyone who's experimented much with Gravwell knows you may often end up crafting a long and complex regular expression which you'll want to use over and over, but such a long regex makes the query hard to work with. Macros let you turn that long regular expression (or any other part of a search query) into a much shorter name you can use again and again.
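As a quick illustration (the macro name, tag, and regex here are made up for the example), a long extraction regex can be saved as a macro and then invoked with a `$` in any query:

```
# Saved macro APACHE_ACCESS expands to something like:
#   regex "(?P<ip>\S+) \S+ \S+ \[[^\]]+\] \"(?P<method>\S+) (?P<path>\S+)"

# The query then shrinks to:
tag=apache $APACHE_ACCESS | count by method | table method count
```

The macro is expanded before the query runs, so it behaves exactly as if the full regex had been typed inline.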
We are excited to announce the immediate availability of Gravwell version 3.3.0. Though a minor release, it includes a few fairly significant features and a whole heap of bug fixes and performance improvements. Over the next couple of days we will publish a series of blog posts detailing each of the new additions to Gravwell, but first we need to show off the centerpiece of this release: Overwatch.
When we started Gravwell years ago, we knew it was going to be a significant undertaking requiring some serious tooling under the hood. Building a custom data lake and analytics platform from scratch that can scale to hundreds of TB/day ain't easy. We chose Go for a lot of reasons and that choice has paid dividends in terms of what we've been able to accomplish in so short a time.
This post is about our tooling and some of the lessons we have learned along the way in managing a large Go codebase. A few weeks ago, Gravwell switched to Go modules on both our open-source GitHub repositories and our internal repo. Let's talk about our planned workflow going forward and a few caveats we've run into.