This week marks the release of Gravwell version 2. It’s been a journey with plenty of long days and nights, but we’re really excited about the new capabilities. We’ll be publishing a series of blog posts that go into detail on the major features, but for now I’d like to discuss the highlights.

The largest area of improvement in this release is enterprise features. Gravwell 2.0 supports high-availability deployments and includes replication and age-out features that further our “leave no data behind” mantra. Customers can now configure cloud or local deployments for exactly the right amount of data retention and compliance. The age-out system can be driven by time or by storage size, moving older data out of high-performance storage and into longer-term storage that is accessed less frequently. We’ve also improved our federator capabilities, so more customized deployments (e.g. multiple frontends or collection through VPNs) are easy to orchestrate, and Gravwell data (like search history) is automatically synchronized throughout the cluster.
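To make the time-or-size idea concrete, here is a minimal Go sketch of an age-out decision. This is purely illustrative; the type names, policy fields, and thresholds below are invented for the example and are not Gravwell’s actual configuration syntax or internals.

```go
package main

import (
	"fmt"
	"time"
)

// shard stands in for a block of stored entries; the names here are
// illustrative, not Gravwell's internal types.
type shard struct {
	Name    string
	Created time.Time
	SizeGB  float64
}

// agePolicy captures the two knobs described above: a maximum age for hot
// storage and a cap on total hot storage size.
type agePolicy struct {
	MaxHotAge    time.Duration
	MaxHotSizeGB float64
}

// shardsToMigrate returns the shards that should move from hot to cold
// storage, oldest first, until both the time and size constraints are met.
func shardsToMigrate(shards []shard, p agePolicy, now time.Time) []shard {
	var out []shard
	var totalGB float64
	for _, s := range shards {
		totalGB += s.SizeGB
	}
	for _, s := range shards { // assume shards are ordered oldest first
		tooOld := now.Sub(s.Created) > p.MaxHotAge
		tooBig := totalGB > p.MaxHotSizeGB
		if tooOld || tooBig {
			out = append(out, s)
			totalGB -= s.SizeGB
		}
	}
	return out
}

func main() {
	now := time.Now()
	shards := []shard{
		{"2018-02", now.Add(-60 * 24 * time.Hour), 40},
		{"2018-03", now.Add(-30 * 24 * time.Hour), 55},
		{"2018-04", now.Add(-2 * 24 * time.Hour), 20},
	}
	p := agePolicy{MaxHotAge: 45 * 24 * time.Hour, MaxHotSizeGB: 100}
	for _, s := range shardsToMigrate(shards, p, now) {
		fmt.Println("migrate to cold storage:", s.Name)
	}
}
```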

Searching has received a MAJOR power increase with the addition of an in-pipeline anko module. Gravwell now allows custom scripts to run in the pipeline, which means Turing-complete analytics capability. Gravwell users can rely on the throughput of the custom Gravwell storage and retrieval system and focus on the part they actually care about: data analytics. The new search modules let data scientists introduce advanced capabilities and their own secret sauce directly into the pipeline for proprietary analytics, custom machine learning, and more.
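For a feel of what scripting against entries can look like, here is a small standalone Go sketch that embeds the open source anko interpreter (github.com/mattn/anko), assuming its current env/vm package layout. The entry map and the script are hypothetical stand-ins; this is not the Gravwell pipeline module API.

```go
package main

import (
	"fmt"
	"log"

	"github.com/mattn/anko/env"
	"github.com/mattn/anko/vm"
)

func main() {
	// Hypothetical entry; in a real pipeline this would come from storage.
	entry := map[string]interface{}{
		"Tag":  "syslog",
		"Data": "failed login for user bob from 10.0.0.5",
	}

	// Expose the entry and a print helper to the script environment.
	e := env.NewEnv()
	if err := e.Define("entry", entry); err != nil {
		log.Fatal(err)
	}
	if err := e.Define("println", fmt.Println); err != nil {
		log.Fatal(err)
	}

	// A user-supplied script: Turing-complete logic applied per entry.
	script := `
if entry["Tag"] == "syslog" {
	println("inspecting:", entry["Data"])
}
`
	if _, err := vm.Execute(e, nil, script); err != nil {
		log.Fatal(err)
	}
}
```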

Also on the search front is the new resource system, which allows for lookups and data enrichment. Data can be uploaded as a resource and used to enrich entries in-pipeline for later analysis. For example, we’re using this internally to fuse our Ubiquiti wifi access point logs with DNS hostnames, translating MAC addresses into actual device names. It’s now a lot easier to see when Bob is in or out of the office, or when rogue devices show up. These lookup resources can also be created from Gravwell queries themselves, which enables things like whitelisting. This is particularly relevant in the OT space, where Gravwell can search all network traffic to identify communication channels into a process controller, whitelist those channels, and then easily flag non-approved communications.
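As a rough illustration of what lookup-style enrichment does, consider joining access point events against a small lookup table. This is a standalone Go sketch, not the Gravwell resource API, and the MAC-to-hostname table is invented for the example.

```go
package main

import "fmt"

// apEvent mimics a simplified wifi access point log entry.
type apEvent struct {
	MAC    string
	Action string
}

func main() {
	// Hypothetical lookup resource: MAC address -> device hostname.
	hostnames := map[string]string{
		"b8:27:eb:12:34:56": "bobs-laptop",
		"f4:5c:89:ab:cd:ef": "conference-room-tv",
	}

	events := []apEvent{
		{MAC: "b8:27:eb:12:34:56", Action: "associate"},
		{MAC: "00:de:ad:be:ef:00", Action: "associate"},
	}

	// Enrich each event with a hostname; unknown MACs stand out as
	// candidates for rogue devices.
	for _, ev := range events {
		name, ok := hostnames[ev.MAC]
		if !ok {
			name = "UNKNOWN (possible rogue device)"
		}
		fmt.Printf("%s %s -> %s\n", ev.MAC, ev.Action, name)
	}
}
```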

There are a few other search modules and changes that improve general quality of life for our beloved users. Notably, it’s now possible to export search results via the user interface instead of only on the command line.

On the orchestration front, we have tweaked configuration to improve the DevOps cycle. In an upcoming post, Gravwell founder Kris Watts will exercise this power by building and deploying a 6-node Gravwell cluster in only 5 command lines. One of the reasons we started this company was dissatisfaction with the "time to value" of other data analytics solutions. It's painful to spend significant time and resources just to get a solution spun up and ingesting (normalized) data before analysis can even begin; sometimes it takes as long as two years before value is realized. Gravwell, on the other hand, deploys in seconds and can begin ingesting anything and everything right away. Value begins immediately and grows over time as queries become more sophisticated and dashboards are better developed. The unstructured and flexible nature of the Gravwell tech stack enables a faster time to value. I met with a hunt teamer at a large incident response services company who put it succinctly:

"I can throw my syslogs, windows events, pcaps, suspect binaries, indicator hits, or whatever... and search through it all at once?! Holy %*@&!"

On the data ingest side, we continue to add new ingesters as customers come online or have specific needs. This release cycle marks the publication of ingesters for Netflow v5 and for Google Cloud logging and Pub/Sub (GCP Stackdriver).
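For the curious, Netflow v5 is a fixed-layout binary format: a 24-byte header followed by 48-byte flow records. The Go sketch below decodes that layout purely to illustrate the kind of data an ingester handles; it is not the Gravwell ingester code itself.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"log"
)

// nf5Header is the 24-byte Netflow v5 packet header (network byte order).
type nf5Header struct {
	Version          uint16
	Count            uint16
	SysUptime        uint32
	UnixSecs         uint32
	UnixNsecs        uint32
	FlowSequence     uint32
	EngineType       uint8
	EngineID         uint8
	SamplingInterval uint16
}

// nf5Record is one 48-byte flow record that follows the header.
type nf5Record struct {
	SrcAddr, DstAddr, NextHop  [4]byte
	Input, Output              uint16
	Pkts, Octets, First, Last  uint32
	SrcPort, DstPort           uint16
	Pad1, TCPFlags, Proto, Tos uint8
	SrcAS, DstAS               uint16
	SrcMask, DstMask           uint8
	Pad2                       uint16
}

// decode parses a raw Netflow v5 packet into its header and flow records.
func decode(pkt []byte) (nf5Header, []nf5Record, error) {
	r := bytes.NewReader(pkt)
	var hdr nf5Header
	if err := binary.Read(r, binary.BigEndian, &hdr); err != nil {
		return hdr, nil, err
	}
	recs := make([]nf5Record, int(hdr.Count))
	if err := binary.Read(r, binary.BigEndian, recs); err != nil {
		return hdr, nil, err
	}
	return hdr, recs, nil
}

func main() {
	// In a real ingester this packet would arrive over UDP from an exporter.
	pkt := make([]byte, 24+48)
	binary.BigEndian.PutUint16(pkt[0:2], 5) // version 5
	binary.BigEndian.PutUint16(pkt[2:4], 1) // one flow record
	hdr, recs, err := decode(pkt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("netflow v%d with %d record(s)\n", hdr.Version, len(recs))
}
```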

Finally, we continue our unending quest for performance, and our benchmarks are looking better than ever. For example, optimizations around single-node deployments have resulted in a 10x speedup in search times. We care about all Gravwell customers, whether they run large clusters or single-node installations.

There’s a LOT to this release and I’m looking forward to the upcoming posts that go into further detail. The roadmap has some pretty exciting stuff ahead as well.

For those of you heading to San Francisco for RSA, be sure to stop by the ICS Village in the RSA Sandbox area to see Gravwell in action as the analytics platform for the unified SOC monitoring the event. If you’d like to schedule a meeting, email info@gravwell.io and we’d be happy to chat.