A while ago, I came across a note I had written to myself about a future project to work on: a 3D visualization of a video. I had been thinking about how videos are basically sequences of 2-dimensional images, composed of millions of pixels arranged at different x and y positions, so any video can be represented as a 3-dimensional series of numbers, where the timestep is represented by the z axis. Each frame of the video could be visualized as a different slice of a 3D object, most likely a cube or rectangular prism. More realistically, a video with three color channels (RGB) is 4-dimensional (with a resolution of 3 on the w axis), but we can display it in 3D by drawing each pixel of the video at a different physical point in space with its actual color.
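To make that concrete, here is a minimal plain-Python sketch of a video as a 3D stack of frames (the tiny 2×2, 3-frame "video" is made up purely for illustration):

```python
# A tiny hypothetical "video": 3 frames of 2x2 RGB pixels.
# Indexing order is video[t][y][x] -> (r, g, b), so the
# timestep t plays the role of the z axis in the 3D object.
width, height, frames = 2, 2, 3

video = [
    [[(t * 10, 0, 0) for x in range(width)] for y in range(height)]
    for t in range(frames)
]

# One frame is a 2D slice of the stack...
frame0 = video[0]
# ...and a single pixel is addressed by (t, y, x).
print(video[2][1][0])  # (20, 0, 0)
```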
In theory, it sounded like a cool idea: a 3D object where each “layer” of the object is a different frame of the video. But as usual, actually doing it was a much more difficult process than I initially thought it would be.
For testing, I used this lovely drone video of snow-covered mountains from Pixabay. I thought it would work well because it contains gradual motion but no sudden changes, so the content of the video can be seen moving over time as the camera position changes. I read that Blender loads a series of images better than a single video file, so I first had to convert the video to individual images. I first tried using the node compositor to do this, but I ultimately had more success with the Blender video editor.
I found that PNG images took up far too much storage space, so I exported the video as a series of 420 JPEG image files¹ with a slight level of compression, vastly reducing their file sizes.
My first idea was to use a custom shader that took advantage of Blender’s advanced node editor to render different segments of the object as different video frames based on the location of those segments in three dimensions. Using the Object Info and Separate XYZ nodes, I could automatically use the vertical (z) location of each rendered part of a cube’s surface as the current frame number of the video.
This didn’t work quite as well as I had hoped. The main issue was that the image texture node (which also supports animated/video textures) does not allow the current frame number to be set from the output of another node, which meant that I could not use the surface coordinate information to select a video frame to use as a texture for a given part of the object. I found the Time Node, which outputs the current frame from the timeline, but I could not find a way to use this data as an input for a video texture.
I decided that a better solution would be to create a Python script to add individual objects to represent each frame of the video, add image textures to each one that corresponded to the correct frame of the video, and combine all of these slices. Blender has an extremely powerful Python interface that allows for scripting and automation of almost any process that could be performed manually in Blender: creating objects, adding textures, manipulating material nodes, manipulating scenes, rendering, and more. Until now, I had done very little with Python and Blender, but this was a great way to learn more about the API’s capabilities.
The first thing to do was to load the necessary image texture into Blender so it could be used to texture the object. The file path for each image can be generated by concatenating the path of the folder and the name of the image file itself, which is a zero-padded frame number (0001.jpg, 0002.jpg, and so on):
```python
# Name of image file
name = str(i).zfill(4) + ".jpg"
# File path
filepath = settings.file_path + name
```
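As a quick sanity check outside Blender, the naming scheme can be exercised in plain Python (the folder path below is a stand-in; `os.path.join` is used instead of string concatenation so that a missing trailing separator doesn't matter):

```python
import os

# Stand-in for settings.file_path in the add-on
file_path = "/tmp/frames"

# Zero-padded, four-digit names keep the frames in order
# even when sorted as plain strings
names = [str(i).zfill(4) + ".jpg" for i in range(1, 6)]
paths = [os.path.join(file_path, name) for name in names]

print(names[0])  # 0001.jpg
assert names == sorted(names)
```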
Next, the image is loaded:
```python
# Load image into scene
image = bpy.data.images.load(
    filepath, check_existing=True
)
```
Now that the image is detected by Blender, we can create the cube that will be textured with it. This can be done with `primitive_cube_add`. Note that the height (z) location of the object is determined by the provided thickness of each slice.
```python
# Add new cube to scene (one video frame)
bpy.ops.mesh.primitive_cube_add(location=(
    0, 0, i * (slice_thickness / 100 * 2)
))
```
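The spacing expression above can be pulled out into a small helper to see what it does; this is a plain-Python illustration (the function name is mine, not part of the script):

```python
def slice_z(i, slice_thickness):
    """Vertical (z) center of slice i.

    Mirrors i * (slice_thickness / 100 * 2) from the call above:
    a default Blender cube is 2 units tall, so scaling its z axis
    by slice_thickness / 100 makes each slice
    slice_thickness / 100 * 2 units tall, and stacking slices at
    multiples of that height leaves no gaps or overlaps.
    """
    return i * (slice_thickness / 100 * 2)

# With a thickness setting of 25, slices sit 0.5 units apart:
print([slice_z(i, 25) for i in range(4)])  # [0.0, 0.5, 1.0, 1.5]
```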
Next, the cube must be resized based on the value of the `slice_thickness` property. This transforms the cube from having equal width, length, and height dimensions into a thin “slice”. The x and y dimensions of the cube are also changed to reflect the dimensions of the video frame.
```python
# Resize cube to be thinner
bpy.ops.transform.resize(value=(
    size / slice_size, size / slice_size,
    slice_thickness / 100
))
```
The rendering engine must also be set to Cycles²:

```python
# Set rendering engine to Cycles
bpy.context.scene.render.engine = "CYCLES"
```
A material must be created so that the texture can be placed onto the 3D object. I originally used Blender textures, but those are meant for the Blender Render engine, not Cycles; instead, a material is automatically created and added to the object, and the texture is applied with material nodes.
```python
# Check if material exists
mat = bpy.data.materials.get(str(i))
if mat is None:
    # Create new material if none exists
    mat = bpy.data.materials.new(name=str(i))
# Set material to use node editor
mat.use_nodes = True
# List of nodes in material node tree
nodes = mat.node_tree.nodes
# Remove all nodes from material
nodes.clear()
```
Next, the appropriate nodes are added to the material and repositioned so as to be more readable.
```python
# Add nodes
output = nodes.new("ShaderNodeOutputMaterial")
output.location = 500, 0
diff = nodes.new("ShaderNodeBsdfDiffuse")
diff.location = 0, 0
texture = nodes.new("ShaderNodeTexImage")
texture.location = -250, 0
coord = nodes.new("ShaderNodeTexCoord")
coord.location = -500, 0
trans = nodes.new("ShaderNodeBsdfTransparent")
trans.location = 0, -250
mix = nodes.new("ShaderNodeMixShader")
mix.location = 250, 0
```
The correct image file is then set as the source of each layer’s image texture node.
```python
# Set source image for texture
texture.image = bpy.data.images[str(i).zfill(4) + ".jpg"]
# Fac = 0: show the first (diffuse) shader input of the mix node
mix.inputs[0].default_value = 0
```
Links are then created between the outputs and inputs of specific nodes to form the finished material.
```python
# Create links between material nodes
links = mat.node_tree.links
# Generated texture coordinates drive the image texture
links.new(texture.inputs["Vector"], coord.outputs["Generated"])
# Image color feeds the diffuse shader
links.new(diff.inputs["Color"], texture.outputs["Color"])
# Mix between the diffuse and transparent shaders
links.new(mix.inputs[1], diff.outputs["BSDF"])
links.new(mix.inputs[2], trans.outputs["BSDF"])
# Mixed shader is the material output
links.new(output.inputs["Surface"], mix.outputs["Shader"])
```
This produces the following node setup:
I’m experimenting with different material combinations to best show each layer of the video; for now, I’m sticking with solid diffuse. Generated texture coordinates are supplied to the image texture node, so the content of the video frame is displayed on the top and bottom faces of each layer and the sides are the same as the edge pixels of the frame.
To finish, the material is assigned to the object in the correct material slot:
```python
# Assign material to object
if ob.data.materials:
    # Replace the material in the first slot
    ob.data.materials[0] = mat
else:
    # No slots yet; append to create one
    ob.data.materials.append(mat)
```
This process is repeated for each frame of the video. All the layers are then merged into one complete object. The finished object has one material for each frame of the video.
This object can be manipulated, duplicated, or rendered as needed. The finished result looks something like this:
The top of the object is the final video frame. The sides of the object show the gradient between the edges of different frames over time, creating an interesting effect. As the camera moves upward, more of the skyline can be seen; this is shown towards the top of the block as the bright streak on the side expands. The content of the bottom edge of the video frame also changes over time, producing a quite nice texture on the left side of the object in this screenshot. This is what it looks like when rendered:
It can also be compressed into a more cube-like shape, which looks interesting:
I also wanted to make the script into an installable add-on so that people could have it automatically enabled in all of their Blender projects without having to manually execute the script each time they wanted to use it. Additionally, Blender add-ons have user interfaces that allow for anyone running the script to easily interact with it and manipulate settings without diving into the variables in the code.
I based the code on this example add-on. This mostly involves adding a bunch of code to define the UI and render the actual components of the add-on that users can interact with.
```python
import bpy
from bpy.types import Panel


class BasicMenu(bpy.types.Menu):
    bl_idname = "OBJECT_MT_select_test"
    bl_label = "Select"

    def draw(self, context):
        layout = self.layout


class OBJECT_PT_my_panel(Panel):
    bl_idname = "OBJECT_PT_Video_Cube"
    bl_label = "Video Cube"
    bl_space_type = "VIEW_3D"
    bl_region_type = "TOOLS"
    bl_category = "Create"
    bl_context = "objectmode"

    def draw(self, context):
        layout = self.layout
        scene = context.scene
        settings = scene.video_cube

        layout.prop(settings, "max_slices")
        layout.prop(settings, "slice_thickness")
        layout.prop(settings, "slice_size")
        layout.prop(settings, "file_path")
        layout.operator("wm.video_cube_generate", icon="MOD_UVPROJECT")
```
I decided to call the add-on “Video Cube” because it turns a video into a cube. I’ve won awards³ for my spectacular creativity.
I added a field to limit the maximum number of frames that would be loaded, regardless of the number of image files present in the source folder. The “Slice Thickness” and “Slice Size” settings allow the user to control the thickness of each layer and its dimensions, respectively. There is a text field that lets the user select the folder to pull images from, based on this code. Finally, the “Generate” button executes the actual script, which is embedded in the add-on. This is what the actual settings panel for the add-on looks like:
Normally, Blender add-ons are provided as `.zip` files, but I found out that zipping is not required and scripts can be imported as individual `.py` files (though zipping does make installing an add-on with many files much easier). I opened the Python file that contained the script code, then checked the box to enable the add-on. I was met with an error:
```
AttributeError: '_RestrictData' object has no attribute 'filepath'
```
Based on similar problems other people have had, it seemed to be an import error, so I tried adding another line to import the specific module I was using to generate absolute file paths (abspath); this didn’t fix the issue. I then looked at the call stack and found that a check inside the module code itself, which validated the parameters passed to abspath, was being triggered. After playing around with the file path code some more, I eventually discovered that the error was caused by trying to activate the add-on in a new `.blend` file that had not yet been saved, and therefore did not yet have a file path. I tried to find a way to check whether the currently open file had a path, but was unable to, and ultimately removed the relative file path.
You can install the add-on by downloading this Python file, then opening Blender and pressing CTRL + ALT + U to open the User Preferences window. Go to the “Add-ons” section, then click the “Install from File…” button at the bottom of the window. Navigate to the script file, select it, and click “Install from File…” in the upper right-hand corner of the window. Then, just click the check box next to the add-on to enable it. A new section titled “Video Cube” will appear at the bottom of the “Create” tab of the toolbar on the left side of the 3D view.
Once the add-on is installed, just select whatever folder of images you want to use and press the “Generate” button. Something to keep in mind: it may take a while to import all the images and generate the object, especially if you have a large number of images, and rendering may also take some time. You may want to start with smaller image sets and add more later. Enjoy!
I’ve created a new GitHub repository for this add-on and other assorted Blender scripting projects. The code for this specific project is available here; feel free to experiment with it and send me a pull request or issue if you discover anything interesting and want it added, or find any problems with the code.
This was a really fun project to work on and I learned a lot about Blender scripting (and Python). I would say it turned out fairly successfully, and I’m looking forward to future projects with Blender and Python. Have a great holiday season!
1. Yes, it just happened to be 420 files based on the framerate of the video and the way I sped it up. It wasn’t intentional. Unlike this guy and his electric cars.
2. This is actually done near the beginning of the script, but the post flows a little bit better if it’s here.
3. *an award