We're excited to join Nozomi Networks in announcing our integration partnership, which was piloted in the ICS Village at the RSA Sandbox in San Francisco earlier this year. Attendees at RSA also got a first glimpse of the newly unveiled ICS Village. For those unfamiliar with conference villages, the idea is to create a hands-on environment where security professionals can learn about, hack, or break equipment and software they may not encounter day to day. The Gravwell founders have a long history in the ICS space, and we believe in the village's mission: ICS/SCADA, more than most industries, could benefit from some disruption and fresh ideas. The ICS Village can be found at many events this year, including DEFCON and EnergySec (the full event schedule is at https://www.icsvillage.com/events).
Thanks to Gravwell's Google PubSub ingester, it's easy to collect logs and other data from services deployed on Google Cloud Platform. In this blog post, we'll show how to set up Gravwell in GCP and ingest system logs from your virtual machines.
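If you'd like a feel for what the Pub/Sub side of that pipeline looks like, here is a minimal Go sketch that pulls messages from a subscription using the official cloud.google.com/go/pubsub client. This is not the ingester's own code, and the project and subscription IDs are placeholders; in a real deployment the Gravwell PubSub ingester handles this for you.

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/pubsub"
)

func main() {
	ctx := context.Background()

	// Placeholder project ID; substitute your own GCP project.
	client, err := pubsub.NewClient(ctx, "my-gcp-project")
	if err != nil {
		log.Fatalf("pubsub client: %v", err)
	}
	defer client.Close()

	// Placeholder subscription name.
	sub := client.Subscription("gravwell-syslog")

	// Receive blocks, invoking the callback for each message until ctx is done.
	err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
		log.Printf("got %d bytes: %s", len(m.Data), m.Data)
		m.Ack() // acknowledge so Pub/Sub doesn't redeliver the message
	})
	if err != nil {
		log.Fatalf("receive: %v", err)
	}
}
```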
With the release of Gravwell 2.0, Gravwell customers can now deploy multiple webservers tied to a central storage system. This means you can deploy multiple webservers behind a load balancer for better search performance; the webservers synchronize resources, user accounts, dashboards, and search history behind the scenes so users don’t need to worry about which server they’re actually using.
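To make the "multiple webservers behind a load balancer" idea concrete, here is a minimal round-robin reverse proxy sketch in Go using only the standard library. The backend addresses are hypothetical, and this is not Gravwell's load balancer; in practice you would more likely put nginx or HAProxy in front of the webservers, but the principle is the same.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Hypothetical Gravwell webserver addresses; any number of backends works.
	backends := []string{"http://10.0.0.10:80", "http://10.0.0.11:80"}

	var proxies []*httputil.ReverseProxy
	for _, b := range backends {
		u, err := url.Parse(b)
		if err != nil {
			log.Fatal(err)
		}
		proxies = append(proxies, httputil.NewSingleHostReverseProxy(u))
	}

	var next uint64
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Simple round-robin: each request goes to the next backend in turn.
		i := atomic.AddUint64(&next, 1) % uint64(len(proxies))
		proxies[i].ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Because the webservers synchronize accounts, dashboards, and search history through the shared storage, any backend can serve any request, which is what makes this kind of naive rotation workable.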
In this blog post we'll walk through deploying a distributed, Docker-based Gravwell cluster. We'll use Docker and a few manageability features to very quickly build and deploy a cluster of Gravwell indexers. By the end of the post we will have deployed a six-node Gravwell cluster, a load-balancing federator, and a couple of ingesters. That six-node “cluster” is also going to absolutely SCREAM, collecting over 4 million entries per second on a single Ryzen 1700 CPU. You read that right: we are going to crush the ingest rate of every other unstructured data analytics solution available, on a single $250 CPU. Let's get started.
This week marks the release of Gravwell version 2. It’s been a journey with plenty of long days and nights, but we’re really excited about the new capabilities. We’ll be publishing a series of blog posts that go into the details of the major features, but I’d like to discuss the highlights here.
ShmooCon, an InfoSec conference run by The Shmoo Group since 2005, is held early each year in Washington, D.C. ShmooCon is a purposely smaller conference, focused on bringing original research to attendees and supporting networking. ShmooCon XIV was held January 19-21 at the Washington Hilton (for the history buffs out there: ARPANET made its public debut at this hotel in 1972). It is important to us at Gravwell to be involved in the community, so I jumped at the chance to attend this year's Shmoo!
We are going to dive into Windows and show how to get logs flowing into Gravwell in under 5 minutes with the WinEvent ingester. Using Windows event queries, we'll audit login behavior and RDP usage, take a look at Windows Defender activity, and identify when Bob from accounting is copying sensitive financial data to external storage devices. Also, Taylor Swift is involved; don't panic, just stay with me.
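The post itself walks through Gravwell queries, but as a rough illustration of the logic behind a login/RDP audit, here is a hedged Go sketch that classifies events using the well-known Windows Security event IDs (4624 = successful logon, 4625 = failed logon) and logon type 10 (RemoteInteractive, i.e. RDP). The struct is a simplified placeholder, not the ingester's actual schema.

```go
package main

import "fmt"

// LogonEvent holds a few fields pulled from a Windows Security event.
// This is a simplified placeholder, not the actual Gravwell schema.
type LogonEvent struct {
	EventID   int
	LogonType int
	User      string
}

// classify labels a Security-log entry using standard Windows event IDs:
// 4624 = successful logon, 4625 = failed logon; logon type 10 = RDP.
func classify(e LogonEvent) string {
	switch e.EventID {
	case 4624:
		if e.LogonType == 10 {
			return "RDP logon: " + e.User
		}
		return "logon: " + e.User
	case 4625:
		return "FAILED logon: " + e.User
	default:
		return "other event"
	}
}

func main() {
	events := []LogonEvent{
		{EventID: 4624, LogonType: 10, User: "bob"},
		{EventID: 4625, LogonType: 3, User: "taylor"},
	}
	for _, e := range events {
		fmt.Println(classify(e))
	}
}
```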
This Gravwell post is all about the wild world of Windows Event logging and analytics. Both Unix and Windows provide standardized central logging facilities; however, the structure and format of the stored logs are dramatically different. Syslog and most other logging systems with roots in Unix approach logging as an unstructured stream: a log entry is a string of text, no more, no less (we are going to ignore journald and its binary madness). Windows, however, logs all events in fully-formed XML and the logging system is integrated into the operating system itself. We should also note that logging in Windows is... less than ideal. If you are coming from the Unix world, throw out all your assumptions; things are different here.
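To show what "fully-formed XML" means in practice, here is a minimal Go sketch that unmarshals the System section of a rendered Windows event with the standard encoding/xml package. The sample event is heavily trimmed for brevity.

```go
package main

import (
	"encoding/xml"
	"fmt"
	"log"
)

// Event models just enough of the Windows event schema to pull the
// provider, event ID, and channel out of the <System> section.
type Event struct {
	System struct {
		Provider struct {
			Name string `xml:"Name,attr"`
		} `xml:"Provider"`
		EventID int    `xml:"EventID"`
		Channel string `xml:"Channel"`
	} `xml:"System"`
}

func main() {
	// A heavily trimmed example of a rendered Windows event.
	raw := `<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Security-Auditing"/>
    <EventID>4624</EventID>
    <Channel>Security</Channel>
  </System>
</Event>`

	var ev Event
	if err := xml.Unmarshal([]byte(raw), &ev); err != nil {
		log.Fatalf("unmarshal: %v", err)
	}
	fmt.Printf("%s event %d on channel %s\n",
		ev.System.Provider.Name, ev.System.EventID, ev.System.Channel)
}
```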
Amazon’s Kinesis Streams service provides a powerful way to aggregate data (logs, etc.) from a large number of sources and feed that data into multiple data consumers. For instance, a large enterprise might use one Kinesis stream to gather log data from their cloud infrastructure and another stream to aggregate sales data from the web services running on that infrastructure. Once the data is in the stream, it remains available for up to a day (or optionally longer) for any number of applications to read it back for processing and analysis. This is particularly useful for customers who want to deploy and destroy virtual machines on a whim; data is stored in the stream rather than on the ephemeral VMs.
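As a rough sketch of what a Kinesis consumer looks like, here is a minimal Go program that reads one batch of records from a single shard using the AWS SDK (aws-sdk-go). The region, stream name, and shard ID are placeholders, and a production consumer would enumerate all shards and keep following NextShardIterator in a loop.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kinesis"
)

func main() {
	// Placeholder region; substitute the region your stream lives in.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := kinesis.New(sess)

	// Start reading from the oldest record still retained in the shard.
	iter, err := svc.GetShardIterator(&kinesis.GetShardIteratorInput{
		StreamName:        aws.String("my-log-stream"),        // placeholder stream
		ShardId:           aws.String("shardId-000000000000"), // placeholder shard
		ShardIteratorType: aws.String("TRIM_HORIZON"),
	})
	if err != nil {
		log.Fatalf("get shard iterator: %v", err)
	}

	// Fetch one batch of records; a real consumer would loop on NextShardIterator.
	out, err := svc.GetRecords(&kinesis.GetRecordsInput{ShardIterator: iter.ShardIterator})
	if err != nil {
		log.Fatalf("get records: %v", err)
	}
	for _, r := range out.Records {
		fmt.Printf("%s\n", r.Data)
	}
}
```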
We’re extremely excited to announce a new major release of the Gravwell analytics platform to our testers. It’s been a long road full of interesting (and sometimes annoying) challenges.