Greetings from the R&D department of Gravwell! We’re here today to show you a sneak peek of one of many features coming in our next release, Gravwell 4.2.0.
Compare Scalability, Cost, and Performance
There has been no shortage of self-proclaimed "Splunk Killers" and log analytics products over the years, with hype and buzzwords thrown about like candy at a parade. We know... we've experienced this problem personally. Unlike candy, however, these offerings left a rotten taste in our mouths. If you're in the market for a log management platform and you're evaluating Gravwell, or any other tool, there are some crucial factors to consider. In this post we'll go through five important questions to ask that can help identify whether a solution may be a sweet fit.
I often quote Spaf, who says, "A system is good if it does what it's supposed to do, and secure if it doesn't do anything else." Making our systems secure requires a few things. We first have to know what the system is supposed to do, but that's usually not where things start in cybersecurity, probably because that's hard and requires a bit of Know Thyself, at which we are terrible. Instead we start at "well, what do we know it's not supposed to do FOR SURE?" Obviously, systems shouldn't be executing malware. Detecting malware via hash, signature, or known behavior is looking for that "known bad". This is not threat hunting. This is detection. This is putting up a most-wanted list to catch criminals.
As applications generate more data, as we adopt more IoT, and as more things move to the cloud, log volumes explode. Traditional log management solutions have trouble keeping up, and their outdated pricing models cause major budgeting issues. We're seeing more and more organizations outgrow legacy log centralization products. When difficult choices are made about which data to keep and which to throw away, valuable information is lost. Events are crucial for gauging the health of services, troubleshooting issues, and hunting cybersecurity threats. When the data isn't available, the business (and its customers) suffers.
Enhance Security by Removing Limits
SIEMs have historically done well in helping organizations detect threats. Modern threat activity has shown, however, that tracking pre-selected data and relying on IOCs (indicators of compromise) isn't enough to protect businesses from attackers. Threat hunting and going off the rails of "pre-fabbed search" are absolutely critical to defending organizations. You don't have to read very much Sun Tzu to learn the importance of "Know Thyself" and the defender's advantage. SIEMs have let us down in this area. Gravwell provides a solution that removes limits and puts you in control of what data you collect and what questions you can ask.
Welcome back to Gravwell HQ! Today we bring you the second post in our two-part blog series on building IPMI ingest and analysis tools. In part one we walked through building an ingester from scratch, and gave an overview of IPMI. In this post, we’ll be taking a tour of how we made our officially supported Gravwell IPMI kit. We’ll go through everything from macros, queries, templates and dashboards, to kit packaging. There’s a lot of great info to cover, so let’s get started!
The Gravwell HTTP ingester now supports a default config block that's compatible with Splunk HEC ingester defaults. To show this in action, we'll use SCYTHE, an awesome attacker simulation tool, along with our old pal Sysmon, and tease some upcoming purple team content along the way.
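Because the listener speaks the standard Splunk HEC wire format, any existing HEC client can be pointed at Gravwell unchanged. As a quick sanity check, a plain HEC-style POST like the following should land an event (the hostname, port, and token below are placeholders for your own setup, not real defaults):

```
# Post a test event using the standard Splunk HEC wire format.
# Replace host, port, and token with your own listener's values.
curl -k "https://gravwell.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": {"source": "sysmon", "message": "test event"}}'
```

A `2xx` response indicates the ingester accepted the event; from there it's searchable under whatever tag the listener is configured to apply.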
(This post is part one of a two-part technology series around building and using an IPMI ingester and kit. Part two coming soon.)
In many data aggregation and analysis tools, the ecosystem is fully closed source, and often even the data ingest protocols are proprietary. This means that if you want to ingest a novel data format of your own, you're either (a) $%*! out of luck, or (b) forced to collapse your data into some form of low-performance, textual, line-delimited data that a generic log ingester will work with.
At Gravwell HQ, we take a different approach. All of our ingesters are open source and freely available under a BSD license, and our ingest framework is open and available as a Go library. In this post, we’ll be taking a tour of how we wrote a real and officially supported Gravwell ingester: the new Gravwell IPMI Ingester. We’ll cover how we manage configuration files, set up and manage indexer connections, and transform IPMI data into a flexible JSON schema before sending it out.
In today's blog, we'll give a short overview of the transaction module introduced in our most recent update: Gravwell 4.1.5. The transaction module is a powerful module that can rewrite individual entries into grouped entries based on any number of keys; essentially, it allows you to collate entries according to whatever criteria you choose.
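As a rough illustration of the shape of such a query (the tag, extracted field names, and exact flags here are placeholders; see the module documentation for the real options), grouping syslog entries by host and application might look something like:

```
tag=syslog syslog Hostname Appname Message
| transaction Hostname Appname
| table
```

Each resulting entry then represents one (Hostname, Appname) group rather than a single log line, which makes session- or host-level questions much easier to ask.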
Gravwell launched our free Community Edition in July 2018, and it has become an invaluable resource for home lab users and anyone looking to monitor their personal network or wrangle large amounts of data (up to 2GB/day) into actionable insights. In this blog post, Dustin Finn, one of our first CE users and recipient of the inaugural “CE User of the Year” Award, shares some of the cool projects he’s working on using Gravwell Community Edition.