Logs… They try to tell you what’s going on in a system, but it takes a special kind of patience to read through hundreds of thousands of lines of machine-generated text full of arcane errors and differing timestamps.
As a security analyst, part of my job involves looking at DNS logs for potential customers and showing what they might have on their network, as well as what OpenDNS would have blocked. In these reviews, we don’t have access to the systems or to logs from other events that could provide extra context. We typically only have information from BIND, Microsoft Windows DNS, Infoblox, or another vendor or service. We essentially perform DNS incident response, using several techniques to speed up the process and help make sense of the information.
Eyeballing log files line by line isn’t going to get us anything more than a headache. It’s much smarter to tackle the problem programmatically.
When working with DNS logs, we tend to follow these steps:
- Sanitize the data
- Sort and unique the data
- Analyze the data
- Report
When we first acquire a log file, it has its own special format. We have to convert the data to something we can work with. If we’re just trying to find the bad stuff calling out to the bad places, we don’t need much more than domain names, so we will isolate that part of the file.
The most useful logs we get, from a usability perspective, are those in CSV format. If you’re running some version of MS Office, Excel will try to open this kind of file. That’s not recommended, though: Excel can only handle so much data before it eats all the memory in the system and seizes up. We work almost exclusively at the command line and with Python scripts.
CSV is really just text with the values separated by commas, which saves us a little typing. If we have a log file in CSV format, it might look like this when viewed in the terminal:

To get just the domains, we can run the ‘awk’ command to print a specific column. For these lines, the domain contacted is in column seven, with the columns separated by commas. The command to type in this case is:
cat example.txt | awk -F, '{print $7}'
which prints the following:
This can be sent to a file for use during the analysis portion, like so:
cat example.txt | awk -F, '{print $7}' > justthedomains.txt
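If any of the fields ever contain quoted commas, a plain comma split can land on the wrong column. As an alternative, here is a minimal Python sketch that does the same extraction with the csv module; the file names and the column position are taken from the awk example above:
import csv

# Minimal sketch: pull the seventh field (index 6) from each comma-delimited line.
# File names and column position match the awk example above.
with open('example.txt') as f, open('justthedomains.txt', 'w') as out:
    for row in csv.reader(f):
        if len(row) > 6:
            out.write(row[6] + '\n')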
That was an easy example. Often, the logs are much more complex. Here, for example, are some of the top lines from a two-gigabyte file:

We need to get the domains out of this file too, but first we have to remove these unnecessary header lines from the top using Vim:
#Software: SGOS 6.5.5.1
#Version: 1.0
#Start-Date: 2015-09-07 09:00:00
#Date: 2015-09-03 16:22:16
#Fields: date time time-taken i <snip>
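Deleting a handful of header lines in Vim is quick, but on a two-gigabyte file it can be easier to strip them in a script. Here is a minimal sketch, assuming the headers all start with ‘#’ as they do above; the file names are hypothetical:
# Minimal sketch: copy the log while skipping the '#'-prefixed header lines.
# File names are hypothetical; adjust to match your environment.
with open('bluecoat.log') as f, open('bluecoat_noheader.log', 'w') as out:
    for line in f:
        if not line.startswith('#'):
            out.write(line)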
We are now left with a large file of lines that look like this:
2015-09-07 09:00:02 169 192.168.1.223 - - - OBSERVED "Technology/Internet" - 200 TCP_NC_MISS POST image/jpeg http search.namequery.com 80 / - - "Mozilla/5.0 (compatible; MSIE 8.0;)" 10.251.106.45 239 233 - "none" "none"
Continuing forward to get just those domains: the columns of information on each line are separated by spaces, so we can try to grab the domains with ‘awk’ again. By default, ‘awk’ treats whitespace as the field delimiter, so we don’t have to specify one the way we did in the CSV example (where the ‘-F,’ switch set a comma as the delimiter).
That didn’t go so well. The lines are not in perfectly consistent columns, so a fixed column number won’t reliably land on the domain. It looks like we will have to find a different way to grab those domains.
The following Python script achieves what we want:
import re
from urlparse import urlparse  # Python 2; on Python 3 this lives in urllib.parse

with open('DNS_logs.log') as f:
    for eachline in f:
        # Find any http(s) URLs on the line
        for url in re.findall(r'(https?://\S+)', eachline):
            url_components = urlparse(url)
            if url_components.scheme in ('http', 'https'):
                justthedomain = url_components.netloc
                print justthedomain
Here are the results (of just a small part) after running the script:
We’re left with a list of domains, some of which are duplicates (because the client machines contacted them multiple times). To make processing faster during the analysis stage, we want to remove the duplicates. Assuming we ran the Python script and sent its output to a text file called domains.txt, we can get just the unique domains:
sort -u domains.txt > unique_domains.txt
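For those who would rather stay in Python, the deduplication can be done with a set instead; a minimal sketch using the same file names as the shell example:
# Minimal sketch: read domains.txt, drop duplicates, write a sorted unique list.
with open('domains.txt') as f:
    unique_domains = sorted(set(line.strip() for line in f if line.strip()))

with open('unique_domains.txt', 'w') as out:
    out.write('\n'.join(unique_domains) + '\n')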
With the unique list in unique_domains.txt, we can run the domains through OpenDNS Investigate using its API to get all kinds of information, including the domain score (which indicates whether a domain is considered malicious or benign), Whois details, ASN information, related domains, and more.
Using the Investigate API is straightforward and well documented. Investigate makes it possible to send a list of domains (we have sent millions at a time) using urllib2, receive a JSON document, parse through it, and write the results to a file for quick analysis.
Going over everything that’s possible with the Investigate API is beyond the scope of this post, but the following example demonstrates how to gather Whois information for a domain. We’ll use a domain from the CSV file we looked at first: monarchestatemanagement[.]com
from urllib2 import Request, urlopen
import json

api_key = 'Your Investigate API Key'
headers = {'Authorization': 'Bearer ' + api_key}
request = Request('https://investigate.api.opendns.com/whois/monarchestatemanagement.com.json', headers=headers)
response_body = urlopen(request).read()
values = json.loads(response_body)
print values['registrarName'] + ',' + values['expires']
Running this prints the fields we requested, the registrar and the expiration date:
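The same request can be wrapped in a loop to cover the whole unique_domains.txt list and write the answers out as CSV. Here is a minimal sketch reusing the whois endpoint from the example above; the output file name is an assumption, and for very large lists the bulk endpoints described in the Investigate documentation are a better fit:
from urllib2 import Request, urlopen
import json

api_key = 'Your Investigate API Key'
headers = {'Authorization': 'Bearer ' + api_key}

# Minimal sketch: query the whois endpoint for every unique domain and
# write the registrar and expiration date to a CSV for quick review.
# 'whois_results.csv' is a hypothetical output file name.
with open('unique_domains.txt') as f, open('whois_results.csv', 'w') as out:
    for line in f:
        domain = line.strip()
        if not domain:
            continue
        request = Request('https://investigate.api.opendns.com/whois/' + domain + '.json',
                          headers=headers)
        values = json.loads(urlopen(request).read())
        out.write(domain + ',' + str(values.get('registrarName')) + ',' + str(values.get('expires')) + '\n')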
We can acquire more than just Whois information on our list of domains with the Investigate API. Using the same domain as in the previous example: the original logs show allowed communication to monarchestatemanagement[.]com, but looking at it with the Investigate API, we learn that this domain would have been blocked if the organization had been using OpenDNS (the screenshot is from the web interface for Investigate):
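That block decision can also be checked from a script. Below is a hedged sketch using the Investigate categorization endpoint; the endpoint path and the response shape (a status of -1 for malicious, 0 for unknown, 1 for benign) are assumptions based on the public Investigate API documentation, so verify them against the current docs:
from urllib2 import Request, urlopen
import json

api_key = 'Your Investigate API Key'
headers = {'Authorization': 'Bearer ' + api_key}
domain = 'monarchestatemanagement.com'

# Assumed endpoint and response layout -- check the Investigate documentation.
request = Request('https://investigate.api.opendns.com/domains/categorization/' + domain,
                  headers=headers)
values = json.loads(urlopen(request).read())
print domain + ',' + str(values[domain]['status'])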
Log analysis doesn’t have to be boring. This is really just the tip of the iceberg.
We are always exploring new ideas in this area. One of the more interesting ways we look at logs is by shipping them with Logstash to an Elasticsearch cluster for visual analysis with Kibana.
The technologies are out there to enable you to get out of your text editor and into a better place.