Gravwell 4.1 introduces a new module - Enrich - that can add static data to every entry in a query. Sometimes you need to attach static data to a dataset: the standard deviation computed across all entries, annotations about the query itself, or several data points fused from a resource. The enrich module provides this simple but important capability.
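As a rough sketch (the tag, field name, and value here are hypothetical, and the exact enrich syntax may differ in your Gravwell version), tagging every entry in a query with a static label might look like:

```
tag=syslog enrich datacenter east | table datacenter
```

Here every entry flowing through the pipeline would pick up an enumerated value named `datacenter` set to `east`, which downstream modules can then reference like any extracted field.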
Sometimes, you just need to get data into Gravwell without setting up any ingesters--maybe you want to analyze a collection of log files somebody emailed you, or maybe you've got a pcap file from Wireshark. We've had command-line tools for this for years, but with Gravwell 4.1.0 we're pleased to announce a new feature: a flexible, easy-to-use way to ingest data right from the web UI! This interface lets you drag and drop line-delimited logs, packet capture files, or entries downloaded from a Gravwell query; Gravwell will figure out what you gave it and parse it appropriately.
The Gravwell team is happy to announce the release of Gravwell 4.1. A few highlights of what's included in the new release:
- Compound Query support
- Web UI based ingester
- A new “enrich” module
- Temporal mode in the “dump” module
- Internal performance and stability improvements
We’ll have a series of blog posts discussing the various features of Gravwell 4.1, but we wanted to start with one of our favorites: Compound Queries.
This week sees the release of Gravwell 3.3.9, our last planned release prior to the 3.4.0 "Big Bang" release. The Big Bang release will introduce Gravwell kits (our way of providing pre-packaged dashboards, resources, SOAR scripts, and more) plus lots of new user interface features. But first, let's talk about 3.3.9. This relatively boring release is mostly comprised of bug fixes, a new timegrinder timestamp format, and one UI tweak. The full change log is available here.
One of the exciting new features in Gravwell 3.3.0 is search macros. Anyone who's spent much time with Gravwell knows you often end up crafting a long, complex regular expression that you'll want to use over and over, but such a long regex makes the query hard to work with. Macros let you turn that long regular expression (or any other part of a search query) into a much shorter name you can use again and again.
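For illustration (the macro name and its expansion are hypothetical; macros themselves are defined through the Gravwell UI), a macro named APACHE_ERRORS that expands to a long grep expression lets you replace the unwieldy inline version with a reference. In Gravwell queries, macros are invoked with a `$` prefix:

```
tag=apache $APACHE_ERRORS | count
```

When the search runs, `$APACHE_ERRORS` is expanded in place to whatever query fragment the macro contains, so the short form behaves exactly like the long one.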
We are excited to announce the immediate availability of Gravwell version 3.3.0. Although this is a minor release, it includes a few fairly significant features and a whole heap of bug fixes and performance improvements. Over the next couple of days we will be doing a series of blog posts detailing each of the new things in Gravwell, but first we need to show off the centerpiece of this release: Overwatch.
With Gravwell 3.2.4 we've introduced a new search module: kv, short for 'key-value'. This module is designed to help you extract key-value data from text entries without having to hand-craft regular expressions. It also interfaces with the fulltext indexer automatically, so you can analyze your indexed data more quickly.
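As a hedged sketch (the tag and key names are hypothetical, and flags may vary by version), suppose your entries look like `user=admin src=10.0.0.1 action=login`. Rather than writing a regex with capture groups, kv can pull the named fields out directly:

```
tag=auth kv user src action | table user src action
```

Each listed key becomes an enumerated value extracted from the matching key-value pair in the entry, ready for downstream modules like table or count.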
We are happy to announce the immediate availability of Gravwell version 3.2.0!
We are excited to introduce autoextractors with Gravwell version 3.0.2. Autoextractors make it easy for regex gurus and binary ninjas to generate extractions and share them in a portable format. Autoextractors can dramatically simplify the process of performing field extractions from unstructured data without complicated time-of-ingest data definitions; they can be built and shared by ninjas and used by us mere mortals.
We're continuing to work with investigative reporters to research unscrupulous activity on social media. Most recently, Engadget published a piece on nefarious political influencers on Reddit. We’ve written in the past about analyzing social media comments, but didn’t make the ingesters publicly available. With an increasing need for research in this area, we decided that releasing our Reddit and Hacker News ingesters could help new users get started with Gravwell even faster, so we open-sourced them. Read on to learn how to get the ingesters, how to run them, and how to get started with the data.