Leveraging Raytracers for Fun and Profit
In earlier blog posts, we presented a couple of methods to explore graphs in three dimensions. We focused on real-time rendering in order to navigate our security data interactively, and we shared videos and online demos of our 3D engine as well as details about the force-directed clustering algorithms. Together, these technical details give a progressively clearer picture of the visualization framework we are building here at OpenDNS.
Today, we will explain a fun way to take advantage of legendary raytracers to produce realistic pictures of our data. We will also share some useful pieces of code and scripts so that readers can do the same on their own.
A quick word on raytracers and PovRay
In the 3D graphics world, there are several ways to render an image. The two main techniques are called rasterization and raytracing. We could go on for hours about the details, but it is important to understand the key differences:
Rasterization works by projecting a very large number of triangles onto the screen, which are then shaded through a GPU pipeline simulating various lighting models. The process is extremely fast and is widely used in applications requiring real-time graphics and very low latency (games, simulators, interactive interfaces, etc.).
Raytracing works differently: it generates an image by tracing the path of light through each pixel of an image plane. The light rays traverse the scene according to the laws of optics (reflection, refraction, diffraction, absorption, etc.). The process aims for a high level of realism and comes at a great computational cost. Raytracing is widely used in 3D movies, and a single image can often take several hours to generate.
PovRay (“Persistence of Vision Raytracer”) belongs to the second category. The interesting thing about PovRay is that it can be scripted and scenes can be described using a simple language. We’re going to take advantage of this to generate our 3D scene from the graph datasets!
First step: Create a PovRay template
The first thing we need is a PovRay template. We need to define a standard scene and a way to draw nodes and edges.
It is very simple indeed: we'll start by including the standard color definitions, setting a few global options, and placing a couple of lights and a camera.
#include "colors.inc" global_settings { assumed_gamma 1.0 ambient_light Gray } background { color Black } light_source { <10000, 0, 0> color White } light_source { <0, 10000, 0> color White } light_source { <0, 0, 100000> color White } camera { sky <0, 1, 0> right <4/3, 0, 0> look_at <0, 0, 0> location <5.0, 5.0, 5.0> }
More documentation on lights and cameras can be found here:
- http://www.povray.org/documentation/view/3.7.0/308/
- http://www.povray.org/documentation/view/3.7.0/246/
We'll use spheres for the nodes and cylinders for the edges.
As an example, we'll define two spheres and one cylinder connecting them.
// First node
sphere {
  <-2.0, 0.0, 0.0>, 1.0                             // Position: (-2, 0, 0), Radius: 1.0
  texture { pigment { color rgb<1.0, 0.0, 0.0> } }  // Color: Red
  finish { ambient .2 diffuse .6 specular .75 roughness .001 reflection { 0.2 } }
}

// Second node
sphere {
  <2.0, 0.0, 0.0>, 1.0                              // Position: (2, 0, 0), Radius: 1.0
  texture { pigment { color rgb<0.0, 0.0, 1.0> } }  // Color: Blue
  finish { ambient .2 diffuse .6 specular .75 roughness .001 reflection { 0.2 } }
}

// Edge connecting the two nodes
cylinder {
  <-2.0, 0.0, 0.0>, <2.0, 0.0, 0.0>, 0.5            // From node 1 to node 2, Radius: 0.5
  texture { pigment { color rgb<0.5, 0.5, 0.5> } }
  finish { ambient .2 diffuse .6 specular .75 roughness .001 reflection { 0.5 } }
}
In the "finish" block, we define the shading of the objects: essentially, it tells PovRay how the spheres and cylinders react to light. In our case, we've defined a nice shiny material with some reflections.
Now if you render this template with PovRay, you should get something like this:
Second step: Convert a 3D graph to a PovRay scene
Now that we have our template, the concept is very simple: take the graph, create a sphere for each node, and for each edge create a cylinder connecting its two nodes. And voilà!
The pseudo-code to do it would look like this:
sphere_radius = 1.0
cylinder_radius = 0.5

foreach node in graph.nodes do
    generate_sphere(node.position, sphere_radius, node.color)

foreach edge in graph.edges do
    generate_cylinder(edge.src.position, edge.dst.position, cylinder_radius, edge.color)
Of course, you can adapt it to map the color/radius to anything else (Node weight, Node degree, Edge type …). It’s up to you!
It also means you will need the node positions already pre-computed by a layout algorithm. You can implement your own or export them from any graph rendering software.
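To make the pseudo-code concrete, here is a minimal Python sketch. The data layout (a dictionary of pre-computed positions and colors), the helper names, and the file names are our own assumptions for illustration, not part of any OpenDNS tool: the script simply reads the template from the first step and appends one sphere block per node and one cylinder block per edge.

# Minimal sketch: turn a laid-out graph into a PovRay scene file.
# The data layout (dict of positions/colors) and file names are hypothetical.

SPHERE = """sphere {{
  <{x}, {y}, {z}>, {radius}
  texture {{ pigment {{ color rgb<{r}, {g}, {b}> }} }}
  finish {{ ambient .2 diffuse .6 specular .75 roughness .001 reflection {{ 0.2 }} }}
}}"""

CYLINDER = """cylinder {{
  <{x1}, {y1}, {z1}>, <{x2}, {y2}, {z2}>, {radius}
  texture {{ pigment {{ color rgb<{r}, {g}, {b}> }} }}
  finish {{ ambient .2 diffuse .6 specular .75 roughness .001 reflection {{ 0.5 }} }}
}}"""

def write_scene(nodes, edges, template_path, out_path,
                sphere_radius=1.0, cylinder_radius=0.5):
    """nodes: {id: ((x, y, z), (r, g, b))}, edges: [(src_id, dst_id, (r, g, b))]."""
    with open(template_path) as f:
        chunks = [f.read()]  # lights, camera and background from the template
    for (x, y, z), (r, g, b) in nodes.values():
        chunks.append(SPHERE.format(x=x, y=y, z=z, radius=sphere_radius, r=r, g=g, b=b))
    for src, dst, (r, g, b) in edges:
        (x1, y1, z1), _ = nodes[src]
        (x2, y2, z2), _ = nodes[dst]
        chunks.append(CYLINDER.format(x1=x1, y1=y1, z1=z1, x2=x2, y2=y2, z2=z2,
                                      radius=cylinder_radius, r=r, g=g, b=b))
    with open(out_path, "w") as f:
        f.write("\n".join(chunks))

# Toy example: the two-node graph from the template section.
write_scene(
    nodes={"a": ((-2.0, 0.0, 0.0), (1.0, 0.0, 0.0)),   # red node
           "b": ((2.0, 0.0, 0.0), (0.0, 0.0, 1.0))},   # blue node
    edges=[("a", "b", (0.5, 0.5, 0.5))],               # grey edge
    template_path="template.pov",
    out_path="graph.pov",
)

Mapping the radius or color to a node's weight or degree then only takes a couple of extra lines in the two loops.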
Results
Here is a small gallery of datasets rendered with the OpenDNS visualization tool, along with some datasets built using our mighty Security Graph:




Tips:
- Depending on the graph you're dealing with, you will probably need to adjust the camera position to choose a good viewpoint.
- When rendering scenes with raytracers, the image calculation can take a while, especially with very large graphs. It is often a good idea to first compute the image at a small resolution and a low level of detail. This way you can quickly see whether your parameters are correct before spending precious minutes or hours rendering an erroneous image. Once you are sure the basic parameters are correct, you can launch the real rendering process at full quality (see the scripted sketch after these tips).
- To achieve the nice "blobby" effect you can see on some of the images, you will need to define a blob block and a couple of iso-values/thresholds. More documentation here: http://www.povray.org/documentation/view/3.7.0/275/
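The draft-then-final workflow from the second tip is easy to script. Here is a rough sketch driving PovRay from Python; the +I/+O (input/output), +W/+H (resolution), +Q (quality, 0-11) and +A/-A (anti-aliasing) switches are quoted from the PovRay documentation as we remember them, so double-check them against your installed version.

import subprocess

def render(scene, output, width, height, quality, antialias):
    # Build the PovRay command line; switches assumed from the documentation.
    args = ["povray", f"+I{scene}", f"+O{output}",
            f"+W{width}", f"+H{height}", f"+Q{quality}",
            "+A0.3" if antialias else "-A"]
    subprocess.check_call(args)

# Quick low-resolution, low-quality draft to validate camera and colors...
render("graph.pov", "draft.png", 320, 240, quality=3, antialias=False)
# ...then the full-quality rendering once the parameters look right.
render("graph.pov", "final.png", 1920, 1080, quality=9, antialias=True)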
Conclusion
We've presented a fun way to take advantage of PovRay to generate realistic pictures of the datasets created with the Security Graph and its API. We hope you found this article interesting and that it triggered some new ideas. We've only used a tiny part of PovRay's features, and you can definitely go much deeper in refining your template to achieve mind-blowing results. Feel free to send and share your best work!