Gravwell is an enterprise data fusion platform that enables security teams to investigate, collaborate, and analyze data from any source, on demand, all with unlimited data collection and retention. Ingest everything. Investigate anything.
Today we released Gravwell 3.3.11, hot on the heels of last week's 3.3.10. In our previous post, we said that 3.3.9 was the final planned release before our big 3.4.0 version, but there were a few important fixes we wanted to get out ASAP! These two releases are almost entirely bug fixes, except for two little features we snuck in; we'll cover the bug fixes first and save the fun stuff for the end!
Gravwell 3.3.11 includes a fix for one of the most interesting bugs we've encountered in a while. The fulltext indexing system includes options that let you exclude certain timestamps and floating-point numbers from indexing; if your log format includes nanosecond timestamps, you want to leave those out or your index will get huge! Unfortunately, our heuristics were a little too simplistic when both the floating-point and timestamp filters were enabled: given an entry containing a timestamp which can also be parsed as a floating-point number, e.g. "1588966630.443406493 3imfJmUKtwtISTcIF 188.8.131.52", the indexer would decide that the IP address was a timestamp and drop it. Luckily the fix was simple, and we're now indexing everything we should.
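To get a feel for why this kind of heuristic is tricky, here's a minimal Go sketch. The filter and names below are our own illustration, not Gravwell's actual indexer code; it shows how a "seconds.nanoseconds" pattern that isn't anchored on the right also swallows the front of a dotted-quad IP address:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// naiveTimestamp is a deliberately simplistic "seconds.nanoseconds" pattern.
// Note it is unanchored on the right: it happily matches the front of a
// dotted-quad IP address like "188.8.131.52".
var naiveTimestamp = regexp.MustCompile(`^\d+\.\d+`)

// looksLikeTimestampOrFloat mimics a filter that skips indexing any token
// resembling a fractional timestamp or a floating-point number.
func looksLikeTimestampOrFloat(tok string) bool {
	if _, err := strconv.ParseFloat(tok, 64); err == nil {
		return true
	}
	return naiveTimestamp.MatchString(tok)
}

func main() {
	entry := "1588966630.443406493 3imfJmUKtwtISTcIF 188.8.131.52"
	for _, tok := range strings.Fields(entry) {
		if looksLikeTimestampOrFloat(tok) {
			fmt.Printf("skipped (not indexed): %q\n", tok)
		} else {
			fmt.Printf("indexed: %q\n", tok)
		}
	}
}
```

Run this and "188.8.131.52" gets skipped right alongside the genuine timestamp, which is roughly the behavior the fix addresses.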
We also found and fixed some logic in the replication system which was causing excessive memory consumption in certain corner cases. A particular sequence of events could get a replication server into a pathological state where all further updates to a particular shard would result in a big memory leak; we've fixed the memory leak and tightened up the logic in a way that should improve replication speeds all around.
There were two small errors in search parsing and handling that we caught and fixed. Until now, explicitly passing the Location enumerated value as an argument to the pointmap renderer would result in an error; this no longer happens. We also noticed that the ipfix module complained if you tried to extract multiple numeric IPFIX fields, e.g. "ipfix 0:1 0:2" would return a parser error. This was the result of an improperly implemented check meant to prevent collisions on enumerated value names during extraction, and it has been fixed.
Another interesting corner case we discovered this week involved ingesters which had the regular expression router preprocessor configured. When such an ingester attempted to connect to more than one indexer, a race could occur in which the preprocessor attempted to negotiate additional tags before all connections were completely established. This would cause those partially-established connections to be dropped, meaning all entries were sent to one or two indexers rather than properly distributed across all of them. We've updated the ingest library to take care of this case.
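The underlying fix pattern is a general one: finish every connection's handshake before any component negotiates additional state. Here's a hedged Go sketch of that shape; the types and function names are purely illustrative and not the Gravwell ingest library API:

```go
package main

import (
	"fmt"
	"sync"
)

// conn stands in for an indexer connection; ready reports whether the
// initial handshake has completed. Illustrative only.
type conn struct {
	name  string
	ready bool
}

// negotiateTag is only safe once a connection's handshake is done; calling
// it early is the analogue of the bug, which dropped the half-open link.
func negotiateTag(c *conn, tag string) error {
	if !c.ready {
		return fmt.Errorf("%s: connection dropped (negotiated %q before handshake)", c.name, tag)
	}
	return nil
}

func main() {
	conns := []*conn{{name: "indexer-a"}, {name: "indexer-b"}, {name: "indexer-c"}}

	// The fix, in miniature: complete every handshake before any
	// preprocessor negotiates additional tags.
	var wg sync.WaitGroup
	for _, c := range conns {
		wg.Add(1)
		go func(c *conn) {
			defer wg.Done()
			c.ready = true // stand-in for the real handshake
		}(c)
	}
	wg.Wait()

	for _, c := range conns {
		if err := negotiateTag(c, "syslog-routed"); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println(c.name, "negotiated tag OK")
		}
	}
}
```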
The most exciting new feature is in the lookup module. This module can look up enumerated values in a table and extract other fields from matching rows; for example, one might look up a MAC address in order to find the corresponding hostname. Until now, the module could only match against a single column, but Gravwell 3.3.10 introduced multi-column matching. For example, the following query finds the 25 least common ports seen in Netflow records:
tag=netflow netflow Port Src Dst SrcPort DstPort Bytes Pkts Protocol Duration
| stats count by Port
| sort by count asc | limit 25
| lookup -r network_services [Protocol Port] [proto_number service_port] (service_name service_desc)
| lookup -r ip_protocols Protocol Number Name as ProtocolName
| table Port service_name count Src SrcPort Dst DstPort Bytes Pkts Protocol Duration service_desc
The first invocation of the lookup module ("lookup -r network_services [Protocol Port] [proto_number service_port] (service_name service_desc)") illustrates the new multi-column matching. It says that the Protocol and Port enumerated values should be compared against the proto_number and service_port columns (respectively) of the specified resource; when it finds a row which matches in both columns, it extracts the columns named service_name and service_desc.
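To make the matching rule concrete, here's a small Go sketch that simulates a multi-column lookup over a made-up network_services table; the rows are invented and the field names simply mirror the query above:

```go
package main

import "fmt"

// row models one line of a hypothetical network_services lookup resource.
// The column names mirror the query; the contents are made up.
type row struct {
	protoNumber, servicePort int
	serviceName, serviceDesc string
}

var networkServices = []row{
	{6, 22, "ssh", "Secure Shell"},
	{6, 443, "https", "HTTP over TLS"},
	{17, 53, "domain", "DNS"},
}

// lookup mimics the multi-column match: a row is returned only when BOTH
// the protocol and port columns match the supplied enumerated values.
func lookup(protocol, port int) (name, desc string, ok bool) {
	for _, r := range networkServices {
		if r.protoNumber == protocol && r.servicePort == port {
			return r.serviceName, r.serviceDesc, true
		}
	}
	return "", "", false
}

func main() {
	// UDP (17) port 53 matches; TCP (6) port 53 does not, because the
	// match must succeed in both columns, not just one.
	if name, desc, ok := lookup(17, 53); ok {
		fmt.Println(name, "-", desc)
	}
	if _, _, ok := lookup(6, 53); !ok {
		fmt.Println("no row matches TCP/53")
	}
}
```

The design point is the same as in the query: a row is extracted only when every listed column matches its corresponding enumerated value.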
The other new feature is a purely utilitarian one: the MULTICAST filtering alias. Gravwell has included the PRIVATE alias for some time, allowing you to easily filter IPs from any private subnet, e.g. "tag=pcap packet ipv4.SrcIP ~ PRIVATE" shows all packets with a source IP in a private network. The MULTICAST alias can be used in exactly the same way to filter IPs in multicast subnets, both IPv4 and IPv6.
These two releases are primarily focused on cleaning up some backend bugs while we prepare 3.4.0 for general release. Meanwhile, the GUI team has been working full-steam on getting our overhauled interface ready for the big release, and some of their new features are truly exciting! We'll be sharing more about them in the coming weeks.
If you don't yet have Gravwell at all, you can sign up for a Free Trial by clicking the button below:
Topics: Community Edition
Written by John Floren
John's been writing Go since before it was cool and developing distributed systems for almost as long.