Python Libraries for Mesh, Point Cloud, and Data Visualization (Part 2)

Ivan Nikolov
Towards Data Science
19 min read · May 12, 2022


This is Part 2 of the tutorial, exploring some of the best libraries for visualization and animation of datasets, point clouds, and meshes. In this part you will get insights and code snippets to get you up and running with Pyrender, PlotOptiX, Polyscope, and Simple-3dviz. Producing stunning interactive visualizations and raytracing renders can be easy!

Examples of outputs from PlotOptiX (left), Pyrender (middle), and Simple-3dviz (right) | Image by the author

In Part 2 of the Python visualization libraries overview, we continue going through some widely known and obscure examples. Two of the libraries that are shown in this article are quite lightweight and give access to a fast generation of visualizations — both manually and programmatically. Polyscope has a large suite of visualization options many of which can be set manually through a GUI interface as the 3D object is shown. Simple-3dviz is a very lightweight implementation, which provides a limited number of visualizations, but is perfect for fast prototyping. On the other hand, there are Pyrender and PlotOptiX. Both libraries require more hardware resources (and in the case of PlotOptiX, an Nvidia card is required), but they provide superior lighting and also raytraced rendering.

If you are interested in using Open3D, Trimesh, PyVista, or Vedo, you can look at Part 1 of the overview tutorial — HERE.

Example of a smooth closed-loop camera trajectory visualization using Simple-3dviz | Image by the author

Again, for each library, the tutorial will go through the installation and setup process, as well as a simple hands-on example demonstrating how to build a minimal working prototype for both visualizing meshes and point clouds, as well as a dataset. Code for all the examples is available at the GitHub repository HERE.

Visualizing weather (Temperature/Humidity) data changes from time point to time point using Polyscope | Image by the author

To follow along, I have provided the angel statue mesh in .obj format HERE and point cloud in .txt format HERE. The object has been featured in several articles [1], [2], [3] and can also be downloaded as part of larger photogrammetry datasets [4], [5]. For demonstrating the visualization of 3D plots, a time-series dataset containing weather data in .csv format is also provided HERE. The weather metadata covers 8 months of diverse weather conditions and is part of long-term data drift research for autoencoders and object detectors [6]. The data is open-source and extracted using the API provided by the Danish Meteorological Institute (DMI). It is free to use in commercial and non-commercial, public, and private projects. For working with the dataset, Pandas is required. It comes by default in Anaconda installations and can be easily installed by calling conda install pandas.

Visualization using Simple-3dviz

Simple-3dviz | Image by the author

The Simple-3dviz library is a lightweight and easy-to-use set of tools for the visualization of meshes and point clouds. It is built on top of wxpython and supports animations, as well as offscreen rendering. Through offscreen rendering, the library also supports the extraction of depth images. In addition to this, two helper functions are provided as part of the library — mesh_viewer for quick visualization of 3D objects and saving screenshots and renderings and func_viewer for fast visualization in 3D of various functions.

The library is limited to a single light source per viewer window, but the light can easily be moved around the scene and repositioned. One of the strongest parts of the library is how easily camera trajectories can be set up. The library has several dependencies, some of which are required (NumPy and moderngl), while others are only needed if you want to save renders or show a GUI (OpenCV and wxpython). This makes the library quite lightweight. If you are using Anaconda, as always it is best to start by creating an environment and then installing the necessary parts through pip. The library runs on Linux and Windows, and on macOS with small lighting-related visualization errors.

conda create -n simple-3dviz_env python=3.8
conda activate simple-3dviz_env
pip install simple-3dviz

Once it has been installed, it is easy to check that everything works correctly by calling import simple_3dviz and then print(simple_3dviz.__version__) (note the underscore in the module name, as opposed to the hyphen in the package name). If the version is shown and no errors are printed, everything is good to go.

To use mesh_viewer and func_viewer on Windows from anywhere, you need to add Python and the conda environment to the system's PATH environment variable. For testing purposes, you can also find the two apps in Anaconda3\envs\your_conda_environment_name\Scripts . In both cases, you will need to call them with their .exe extension. For example, visualizing the angel statue using mesh_viewer is done with mesh_viewer.exe \mesh\angelStatue_lp.obj , while visualizing the function x*y³-y*x³ can be done with func_viewer.exe x*y**3-y*x**3 (be aware that you should not put the function in quotation marks and there should be no spaces between the symbols). The results from func_viewer are shown below.

The built-in app for viewing 2D functions in Simple-3dviz | Image by the author

Loading a 3D object can be done by calling simple_3dviz's Mesh.from_file() , which has additional inputs if a textured mesh is required. In the case of the angel statue, adding the path to the texture to this function resulted in incorrectly mapped UV coordinates and texture. The more roundabout way that worked was to call Material.with_texture_image() after loading the mesh, loading the texture into a material, and finally setting the material object as the material of the mesh.

Likewise, creating the sphere primitive for the scene was also less obvious. Simple-3dviz does not contain functions to create primitives, but it has functions to create superquadric surfaces through Mesh.from_superquadrics() and to create spheres for representing point clouds through the Spherecloud method. In my case, I decided to go for simplicity and use a single sphere from a point cloud. The code for generating the primitive and loading the mesh is given below.

When creating the material for the mesh, we can set up its visual properties. For generating the sphere, we first create a single coordinate position and expand its dimensions. We then pass it to the Spherecloud method together with a size and color. Next, the visualization and animation are done directly in the call to the show function. The function also accepts behaviours for keypresses, camera movement, and object manipulation. The code is shown below.

As seen from the code, I use a slightly hacky implementation. As I did not find a way to directly rotate point clouds, I used the RotateModel behaviour to rotate all objects in the scene in one direction, and then RotateRenderables to rotate just the mesh in the other direction. Finally, the camera position, up direction, background, and image size are set up. The final code is given below.

There are several ways to expand this example. As specified before you can represent the sphere as a superquadric surface. You can also experiment with using different camera trajectories, as Simple-3dviz contains a lot of options — Lines, QuadraticBezierCurves, Circle, as well as repeating movements, going forward and backward between points, etc. Or you can add keyboard and mouse interaction.

Visualization using PlotOptiX (Requires CUDA-enabled GPU)

PlotOptiX result | Image by the author

One of the more interesting libraries explored in these articles, PlotOptiX is a 3D ray-tracing package for the visualization of meshes, point clouds, and very large datasets. It produces high-fidelity results that benefit from modern post-processing and effects like tonal correction, denoising, antialiasing, dynamic depth of field, chromatic aberration, etc. The library is extensively used by Meta, Google, and Adobe. It uses the OptiX 7.3 framework by Nvidia, and on RTX-based cards it runs with many optimizations. It provides both high-resolution still renderings and real-time animations and interactions. It can directly interface with other libraries like pygmsh, the Python wrapper for finite-element mesh generation, or Trimesh, which was already covered in Part 1 of this article.

The library works on Linux, Mac, and Windows and requires a 64-bit Python from 3.6 and up. On Windows, it requires the installation of .Net Framework ≥ 4.6.1, while on Linux it requires the installation of Mono, pythonnet, and FFmpeg. More information on the installation can be read on the GitHub page of PlotOptix. Once all of the prerequisites are installed, the library can be installed in a new Anaconda environment.

conda create -n plotoptix_env python=3.8
conda activate plotoptix_env
pip install plotoptix
pip install tk
pip install pyqt5

In my case, I needed to install Tkinter and PyQt as recommended, to be able to visualize the results. Finally, please be aware that PlotOptiX requires an Nvidia card, as it uses many CUDA features. In my case, I have an Nvidia GTX 1070 Ti, and the library required me to update my drivers to the newest version at the time of writing (512.15). If you get cryptic errors when running the code, a good first troubleshooting step is to update your drivers.

Once everything is installed you can check it by calling import plotoptix and then calling print(plotoptix.__version__). If no problems are detected the next step is to try the “Hello World” example provided on the PlotOptiX GitHub page — HERE. Additional examples can be also installed by calling

python -m plotoptix.install examples

We can load a mesh in PlotOptiX using load_merged_mesh_obj(). As our mesh also has a texture, we load it with load_texture() and update a material with the texture by calling name_of_material['ColorTexture'] = texture_name . This way, when we give this material to the mesh, the texture will be automatically assigned. We also need to initialize the viewer for the scene.

In PlotOptiX, there are a lot of options to tweak that have a direct effect on visualization quality and speed. Some of the most important ones are min_accumulation_step and max_accumulation_step . The first controls how many frames will be accumulated each iteration before the visualization is shown, while the second specifies the maximum number of accumulated frames per scene. Higher values for both give cleaner results with less noise, but slow down the visualization considerably. In addition, shader variables can be tweaked explicitly by calling the set_uint() or set_float() methods with the required variable and its new value. In our case, we change path_seg_range, which sets the minimum and maximum number of segments traced for each ray. We need higher values to properly visualize the glass material that is given to one of the primitive objects. Finally, we can also apply post-processing effects on the rendered visuals by calling add_postproc and specifying the required effect and its values. In our case, we use gamma correction and denoising post-processing effects for cleaning and sharpening the visuals. The code for loading the angel mesh and setting up the environment is given below.

In PlotOptiX we can also create a limited number of primitives — spheres, parallelepipeds, parallelograms, tetrahedrons, etc. To create one, the set_data() function is used with a specified geom parameter, where the position, size, and material of the object can also be set. Lights are created using the setup_light() method, where the light can be set as a physical object and visualized by passing in_geometry=True , and its size, color, and position can be explicitly set. Example code for creating a parallelepiped with glass material, together with lights, is given below.

The way PlotOptiX handles animation and interactivity is through two types of callback functions — on_scene_compute and on_rt_completed . The first is called before rendering and is used for computing everything needed on the CPU side. In our case, we use it to increment a counter and calculate rotation positions for the light. The second is called after the ray tracing and is used to calculate and draw everything on the GPU side. In our case, we use it to call all the rotation and translation functions for the objects and lights. Both types of functions can be paused, restored, or stopped at any time. In our case, we also use a simple class to hold all the necessary variables that would be used by both functions. The two functions are set to be called when the viewer is initialized. The code for this is given below.

The rotate_geometry method is used for easy rotation, where the center can either be the center of mass of the object or a specified point in space. For the movement of the light, there is no specific rotation function, so we calculate the new position in the compute_changes function and set it using the update_light function. The full code for the mesh visualization is given below.

One of the really nice use cases for PlotOptiX is creating raytracing renderings of data plots in 3D. Such visualizations can be especially useful in presentations, as well as video demonstrations and data overviews in front of an audience, where such renderings can be used to catch people’s attention. Currently, the library is missing a way to visualize 3D axes and native 3D text labels, but plotting data from pandas or numpy is extremely easy.

Visualizing weather (Temperature/Humidity/Dew Point) data in 3D using PlotOptiX. The noisy pixels are a result of the lower number of raytracing frame passes. | Image by the author

PlotOptiX is also optimized to display a large number of primitives and leverages RTX cards to speed up visualizations even more. We can demonstrate that by selecting three columns from the weather dataset: temperature, humidity, and dew point. We will use all rows, which amount to around 9K data points. We load the data using pandas and transform the three columns into a NumPy array for input to PlotOptiX's functions. Once the data is in NumPy, we can give it directly as position input to the set_data() method, where we can also change the color and radius of the data points depending on a specific column. The loading of the data and setting of the data points are given in the code below.

Here we also use the map_to_colors method to map the temperature column of the data to a matplotlib colormap that PlotOptiX uses. When we set the data, we can also set the radius depending on a dataset column; in our case, we set it to the humidity column and scale it with a scalar value. Different geometric primitives can be used for visualizing the data points as well. Two planes are also created for the side and bottom plates around the data points. We create one as a parallelepiped and the other as a 2D parallelogram, to demonstrate how each is created, and we give them different predefined materials. Finally, we again use callbacks to create a simple movement pattern for the camera and set up post-processing effects. The full code is given below.

Visualization using Polyscope

Polyscope result | Image by the author

If you need a lightweight viewer and user interface generator that is easy to set up and use, Polyscope is one of the most mature and user-friendly libraries out there. It has versions in both C++ and Python and can be used to easily visualize and manipulate point clouds, datasets, and meshes, either via code or manually through the built-in GUI.

The library contains several pre-built materials and shaders, and supports volume meshes, curve networks, surface meshes, and point clouds. It works on Linux, Mac, and Windows. The only requirement is that the OS supports an OpenGL >= 3.3 core profile and can open display windows. This means that Polyscope does not support headless rendering. It requires Python versions above 3.5. Again, we use Anaconda to create an environment and install the library in it through pip or conda.

conda create -n polyscope_env python=3.8
conda activate polyscope_env
pip install polyscope
OR
conda install -c conda-forge polyscope

Once we have installed the library, we can check if everything works correctly by calling import polyscope and polyscope.init(). If no errors are shown then the library is properly installed.

One thing that Polyscope does not provide out of the box is a way to read mesh data. This can be done by using other libraries like Trimesh or Open3D and then using their vertex, face, and normal data to create and visualize the meshes in Polyscope. As these libraries are not prerequisites for Polyscope, and in the spirit of providing a clear overview of each library separately, we will load a point cloud of the angel statue in .txt format through NumPy. The point cloud contains the XYZ positions, colors, and normals for each point of the angel statue. We can load the point cloud using numpy.loadtxt and then separate the array into point positions, colors, and normals. These can then be used to generate the Polyscope point cloud by calling register_point_cloud and then adding the color by calling add_color_quantity. The code for this is given below.

To create the rotating sphere around the angel statue, we create a point cloud containing a single point and render it as a sphere with a larger radius. As the angel statue uses additional colors, we choose a material that can be blended with the colors, while the rotating sphere uses one that cannot. More information about the materials can be seen HERE. For generating animations and button, mouse, and GUI interactions, Polyscope utilizes callbacks, which are executed in the background without pausing the main thread. As Polyscope does not contain straightforward methods for translation and rotation, we implement a simple method for rotating the angel statue and moving the sphere around it. This is given below.

The main way to update the positions of the rendered points is by calling update_point_positions. For this example, we create a rotation matrix with a rotation around the Y-axis and calculate the dot product with all the point positions of the angel statue. For the sphere, we calculate the new X and Z positions. For both cases, we use the time.time counter. The full code is given below.

For data visualization in 3D, we will utilize the straightforward animation tools provided by Polyscope. We import the dataset using pandas and extract three of the columns — temperature, humidity, and wind speed. To visualize how these three weather features change for each captured point, we utilize visualization of edges between data points, or curve networks as they are called in Polyscope. We first visualize the points themselves as a point cloud, with sizes depending on the temperature. We then animate the edges between the points depending on the time they were captured. The code for reading and pre-processing the dataset, plus visualizing the data points, is given below.

Temperature, Humidity, and Wind Speed time progression illustrated by a 3D edge plot | Image by the author

Once the initial point cloud is visualized and scaled based on the temperature values using add_scalar_quantity and set_point_radius_quantity, we can invoke the callback at each update cycle to both rotate the camera and rebuild the edges using register_curve_network. The camera is rotated using the built-in method look_at() where the new position of the camera is given and the target of the camera is set to the mean point of the dataset. As we want to build an animation of edges showing how the temperature, humidity, and wind speed change from data point to data point we draw the edges only on a subset of the full data at a time. The Polyscope curve network requires edge data in the form of Ex2, where E is the number of edges with the beginning and endpoint of each edge on a separate row. We again utilize a separate counter value to select the subsets in the callback. The code of the callback function can be seen below.

In the callback, we need to repeatedly remove the previous curve segments and then rebuild them. This is done, as there is no easy way to add elements to an already created curve network. We do this by getting the specific curve network and removing all its parts with get_curve_network("network_name").remove(). We then add the scalar that represents the colors of the temperature for each edge. Once we reach the end of the dataset we reset the counter and start over. The full code for animating the dataset visualization is given below.

Visualization using Pyrender

Pyrender result | Image by the author

Another relatively lightweight but powerful library for visualization and animation is Pyrender. It is built on top of Trimesh and uses it for importing meshes and point clouds. A useful feature of the library is that it comes with both a viewer and an offscreen renderer, which makes it perfect for working in headless mode and for integration in deep learning data extraction, pre-processing, and aggregation of results. Pyrender is built using pure Python and works on both Python 2 and Python 3.

The integrated viewer comes with a set of pre-built commands for easy animation, normal and face visualization, changing lighting, and saving images and gifs. It also comes with a limited but easy-to-use metallic roughness material support and transparency. Finally, the offscreen renderer can also be used to generate depth images.

The library works on Linux, Mac, and Windows. The installation directly uses pip and installs all dependencies together with Trimesh. As usual, we create an Anaconda environment for the library and install it.

conda create -n pyrender_env python=3.6
conda activate pyrender_env
pip install pyrender

Once the library is installed, we can check that everything works by importing it with import pyrender and calling print(pyrender.__version__). If no errors are present, the installation is successful. As specified, Pyrender does not have its own way to import meshes and point clouds, but it has built-in interoperation with Trimesh. To load the angel statue mesh, we first use trimesh.load(path_to_angel), and then call pyrender.Mesh.from_trimesh(trimesh_object) to transform it into an object that can be used in Pyrender. We do the same for creating the sphere and ground plane: we first create a UV sphere primitive and a box by calling trimesh.creation.uv_sphere() and trimesh.creation.box() from Trimesh, and then transform each into a Pyrender object by invoking pyrender.Mesh.from_trimesh(). Pyrender can render three types of lights (directional, spot, and point lights) and two types of cameras (perspective and orthographic). For our example, we use a point light pyrender.PointLight() together with a perspective camera pyrender.PerspectiveCamera(). The code for loading the angel statue and creating the primitive objects, camera, and light is below.

Pyrender heavily relies on a node structure of its scenes, where each object is added as a separate node to the scene in a Set. Each node can contain meshes, cameras, or lights and these objects can be either explicitly called or referenced through methods for changing their transformation and properties. Each of the created objects is added to a node and their initial positions are set. These nodes will be later referenced when they will be animated. The code for this is given below.

Once all the nodes are created, we initialize the viewer and set up the viewer and renderer flags with dictionaries containing the possible options. To create the animation Pyrender has the method viewer.is_active which returns True until the viewer is closed. We use it to create a while loop where we change the objects’ positions and orientations. We put all the code used for changing the objects between viewer.render_lock.acquire() and viewer.render_lock.release(), which stop and release the renderer, so changes can be done. The code is given below.

In the loop, we utilize trimesh.transformations.rotation_matrix() to calculate the rotation around the Y-axis of the angel statue. Once the new transformation matrices are calculated, they are applied to the specific object by calling scene.set_pose(name_of_node, transformation_matrix). At the end of the loop, time.sleep() is called, because without it the viewer thread does not get time to update the visualization. The full code for Pyrender is given below.

Pyrender has access to all primitives from Trimesh and can directly create a large number of objects based on 3D coordinates in a NumPy array. We can leverage this to visualize the Temperature, Humidity, and Wind Speed columns of the dataset. We will additionally combine this with visualizing the Wind Direction using capsule primitives. We again use pandas to load the data, extract the necessary columns, and create a NumPy array from them as input to the Pyrender objects. For generating the wind direction capsules, we take the values, which are in the range [0:360], transform them into radians, and input them to trimesh.transformations.rotation_matrix. For the example, we arbitrarily chose to rotate the capsules around the world Y-axis. The resultant transformation matrices are saved and used later to both position and rotate the capsule primitives. The code is given below.

Temperature, Humidity, Wind Speed plot, with ‘arrows’ showing the Wind Direction at each point | Image by the author

Once we have imported all the necessary data and pre-processed the rotations, the Pyrender objects are created and populated from the dataset. trimesh.creation.uv_sphere() and trimesh.creation.capsule() objects are created and used as inputs to the pyrender.Mesh.from_trimesh() method, together with the dataset array as position data. For this, we use the numpy.tile() function to create a 4x4 transformation matrix for each point. For the spheres, we only add the positioning data from the dataset, while for the capsules we also add the calculated rotation matrices. Each of the point clouds created this way is then added as a node to the scene. Finally, we create a camera object, add it to a node, and give it a position overlooking the mean point of the dataset. We rotate the camera by simply invoking the pre-built functionality in the viewer by pressing the 'A' key. To ensure the correct rotation axis is chosen, we explicitly specify it by giving the viewer viewer_flags = {"rotation_axis":[0,1,0]}. The full code for visualization of the dataset is given below.

Conclusion

Congratulations on finishing this tutorial article! It was a long read. With the two parts of this article, I hope that more people will use these incredibly useful, versatile, and intuitive libraries to make their data, mesh, and point cloud visualizations stand out. Each of the explored libraries has strengths and weaknesses, but together they make an extremely strong arsenal that every researcher and developer can utilize, together with more widely known packages like Matplotlib, Plotly, Seaborn, etc. In the next articles, I will focus on specific use cases like voxelization, feature extraction, distance calculations, RANSAC implementations, etc. that can be useful to data scientists, machine learning engineers, and computer graphics programmers alike.

If you want to read more about extracting features from point clouds and meshes, you can look at some of my articles on 3D surface inspection and noise detection [2], [7]. You can find the articles, plus my other research, on my Page, and if you find something interesting or just want to chat, feel free to drop me a message. Stay tuned for more!

References

  1. Nikolov, I., & Madsen, C. (2016, October). Benchmarking close-range structure from motion 3D reconstruction software under varying capturing conditions. In Euro-Mediterranean Conference (pp. 15–26). Springer, Cham; https://doi.org/10.1007/978-3-319-48496-9_2
  2. Nikolov, I., & Madsen, C. (2020). Rough or Noisy? Metrics for Noise Estimation in SfM Reconstructions. Sensors, 20(19), 5725; https://doi.org/10.3390/s20195725
  3. Nikolov, I. A., & Madsen, C. B. (2019, February). Interactive Environment for Testing SfM Image Capture Configurations. In 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Visigrapp 2019) (pp. 317–322). SCITEPRESS Digital Library; https://doi.org/10.5220/0007566703170322
  4. Nikolov, I.; Madsen, C. (2020), “GGG-BenchmarkSfM: Dataset for Benchmarking Close-range SfM Software Performance under Varying Capturing Conditions”, Mendeley Data, V4; https://doi.org/10.17632/bzxk2n78s9.4
  5. Nikolov, I.; Madsen, C. (2020), “GGG — Rough or Noisy? Metrics for Noise Detection in SfM Reconstructions”, Mendeley Data, V2; https://doi.org/10.17632/xtv5y29xvz.2
  6. Nikolov, I., Philipsen, M. P., Liu, J., Dueholm, J. V., Johansen, A. S., Nasrollahi, K., & Moeslund, T. B. (2021). Seasons in Drift: A Long-Term Thermal Imaging Dataset for Studying Concept Drift. In Thirty-fifth Conference on Neural Information Processing Systems; https://openreview.net/forum?id=LjjqegBNtPi
  7. Nikolov, I., & Madsen, C. B. (2021). Quantifying Wind Turbine Blade Surface Roughness using Sandpaper Grit Sizes: An Initial Exploration. In 16th International Conference on Computer Vision Theory and Application (pp. 801–808). SCITEPRESS Digital Library; https://doi.org/10.5220/0010283908010808



I have a Ph.D. in Computer Graphics, Computer Vision, and Interactive Systems. Check out my personal website for more info: https://ivannikolov.carrd.co/