Blender Logo

Blender is a beast. It is one of the most impressive feats of FOSS ever created.

Contents

Resources

Technical And Architectural Modeling

  • I’m pretty impressed with this Construction Lines add-on — here’s the author’s site. This focuses on construction lines but also seems to cover things obvious to CAD people like rational copying and moving with base points. Not free but it is worth $7 just to use to illustrate what CAD users find frustrating about Blender.

  • Clockmender’s CAD functions also show a frustration with normal Blender. Here is Precision Drawing Tools from the same source. Looks like TinyCAD is joining this project.

  • tinyCAD Mesh Tool Add-on - other people are frustrated by Blender not living up to its potential. Has some simple useful geometry helpers. Couldn’t find this in stock add-ons — suspect it’s for 2.8+. Still interesting.

  • Mechanical Blender is trying to bring sensible technical features to Blender. Looks early (or dead) but worth watching.

  • A nice site showing the potential of architecture in Blender.

  • This is the best video I’ve seen demonstrating good techniques and practices for dimensionally accurate technical modeling.

  • Measureit is an amazingly powerful add-on that allows full blueprint style dimensioning. This video, a continuation of the technical modeling video just mentioned, is the best comprehensive demonstration of it (in 2.8+).

  • Another superb demonstration of technical modeling in Blender is this video showing how to model a hard surface item directly from accurate measurements.

Instruction

The only problematic thing about Blender is that, like many of the best tools for professionals, it is horrendously difficult to learn. Fortunately there are some good resources to help.

Screencast-keys is a non-standard addon, found here, that will highlight which keys are pressed so that observers can follow along. I think it could also be helpful for figuring out what key you might have accidentally pressed. Note I haven’t tried this.

Making Illustrative Animated Gifs

Making tutorials or trying to post on BlenderSE? Want fancy illustrative gifs? Look for "Animated GIFs From Screen Capture" in my video notes.

Models

Why start modeling from scratch when you can use someone else’s model that they want you to use? I don’t yet know which of these sources suck and which ones are good, but I’m listing them here for completeness. I got the following list from here.

Textures

Installation

It used to suffice to just sudo apt install blender and a Debian system was happily ready to use Blender. But today it really is not sensible to use a pre-2.8 version, and unfortunately Debian will probably be mired in that for a long while. So you will probably need to go to https://blender.org/download and download their package. Click the "Download Blender 2.83.2" button (or whatever version is current). You can then find the real and proper link down at the message: "Your download should begin automatically. If it doesn’t, click here to retry." That’s helpful to know if you’re trying to install it remotely.

This technique avoids locally saving the archive and might be helpful too for cutting through fluff:

cd /usr/local/src
URL="https://mirror.clarkson.edu/blender/release/Blender3.5/blender-3.5-linux-x64.tar.xz"
wget -qO- "$URL" | sudo tar -xJf -
cd ../bin
sudo ln -s ../src/blender-3.5/blender blender3.5
sudo ln -s blender3.5 blender

Now when you type blender, a proper modern version should start up. Make sure /usr/local/bin is in your $PATH.
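A hypothetical refinement of the commands above: keep the version in one variable so the URL, the extracted directory, and the symlinks can never disagree (the mirror URL pattern is taken from above; check it against the current release).

```shell
# Hypothetical refinement: one variable drives URL, directory, and symlink.
VER=3.5
DIR="blender-${VER}-linux-x64"
URL="https://mirror.clarkson.edu/blender/release/Blender${VER}/${DIR}.tar.xz"
echo "$URL"   # sanity check the URL before piping it anywhere
# cd /usr/local/src && wget -qO- "$URL" | sudo tar -xJf -
# sudo ln -s "/usr/local/src/${DIR}/blender" "/usr/local/bin/blender${VER}"
```

Note the `-J` tar flag: the release archives are .tar.xz, so `-j` (bzip2) will not work.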

Set Up And Configuration

Here are some things I like to change in a default start file.

  • Get rid of the cube. Or not. It’s kind of idiomatic tradition at this point. Just learn to press "ax<enter>". But getting rid of it is an option. It does serve a purpose: alerting you to the fact that you have a brand new untouched project.

  • Put the default camera and light in a collection called "Studio" and turn off its visibility. This makes it easier to delete if you really don’t need it, but also easy to see the default view if that’s helpful.

  • Get rid of .blend1 files. They seem superfluous if you have good habits. (I have never used one.) See Preferences → Save & Load → Save Versions. I’d keep it — why not? — if I could put the path in /tmp or something like that. This guy also was annoyed at Blender’s unhousebroken incontinence and wrote an addon to specify a litter box. Maybe overkill.

  • Nerf F1 with a key reassignment to prevent opening browsers intended for help documentation. Uncheck Preferences → Keymap → Key Binding → type "F1" → Window → View Online Manual.

  • n shelf open. If you’re going to hide one, the t shelf is near useless.

  • Rotation point around 3d-cursor (".6").

  • Vertex snapping, not increment.

  • Viewport overlays → Guides → Statistics. On. (Or right click version in lower right.)

  • Start with plan view. Most of my projects start off with some sensible orthographic geometry. Another good reason is that while it is not the iconic Blender default cube typical start view, it is easier to reproduce. So "`8".

  • Output properties → Dimensions → Frame End. 240 (250 at 24fps is just stupid.)

  • Output properties → Output → Color. RGBA.

  • Output Properties → Output = /tmp/R/f (R for render.) This will create (including parents) /tmp/R/f0001.png, etc.

  • System → Memory&Limits → Undo Steps. Change from 32 to 128.

  • System → Memory&Limits → Console Scrollback Lines. Change from 256 to 1024.

  • Scene Properties → Units → Unit System = Freedom. Actually I’ve been having better luck with mm recently; still not a default.

  • Show Normals size .005 (or .05 with mm). Find this setting by going into Edit Mode (you’ll need that cube!) and then going to the viewport overlays menu that’s next to the "eclipse" looking icon on the top right bar.

  • Change all default generation sizes from 24" to 1". This may not stick and I don’t know how to make it permanent. :-( At least in mm, 24 is about 1"! :-)

  • Clipping values are usually a bit narrow by default. n menu → View Tab → View Section → Clip Start/End. I’m going with 1mm and 1e6mm. Using something less than 1mm seems to cause constant rendering confusion.

  • Perhaps fill out some custom 2d material properties. Even a couple of templates for "Color-FilledStroke" "Color-OnlyStroke" or something like that.

  • Add to quick q menu: Save Copy

  • Add to quick q menu: Preferences

  • Preferences → Animation → F-Curves → Default Interpolation → Linear

  • Preferences → Keymap → Preferences → 3d View → Extra Shading Pie Menu Items

  • Preferences → Input → Keyboard → Default To Advanced Numeric Input (allows stuff like "gx10/3" and useful on-the-fly math).

  • Rebind number keys to not hide collections! Go to Preferences → Keymap → Name and enter "Hide Collection". Uncheck the lot.

  • A caveat on the Default To Advanced Numeric Input setting above: the plus side is you can do stuff like "gx100/25.4" and it will do a sensible thing. The bad part is that minus becomes a literal subtraction and is no longer a shortcut for "oops, I got unlucky choosing the sign of my numeric operation - please fix that".
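If you’d rather script the "Hide Collection" unbinding above than click through the Keymap search, a sketch using Blender’s Python console might do it (a keymap configuration tweak meant to run inside Blender 2.8+; double check the operator name against your version):

```python
# Sketch: deactivate every "Hide Collection" binding from the Python console.
# Run inside Blender (2.8+), then save preferences so the change sticks.
import bpy

kc = bpy.context.window_manager.keyconfigs.user
for km in kc.keymaps:
    for kmi in km.keymap_items:
        if kmi.idname == "object.hide_collection":
            kmi.active = False   # same effect as unchecking it in Preferences

bpy.ops.wm.save_userpref()
```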

Addons I like.

  • 3d View: MeasureIt

  • 3d View: Precision Drawing Tools - perhaps fix the nomenclature file to be better.

       $ grep xed /usr/local/src/blender-2.93.3-linux-x64/2.93/scripts/addons/precision_drawing_tools/pdt_msg_strings.py
       PDT_LAB_DEL = "Relative" # xed - was "Delta"
       PDT_LAB_DIR = "Polar"  # xed - was "Direction"
  • Add Mesh: Extra Objects

  • Mesh: mesh_tinyCAD

  • Import AutoCAD DXF

  • Import Images as planes

Other Addons to consider.

  • 3d View: Stored Views

  • Object: Bool Tools

  • Object: Align Tools

  • Add Mesh: Bolt Factory

  • Mesh: 3d Print Tool Box

  • Interface: Modifier Tools

  • Mesh: F2

  • Mesh: Edit Mesh Tools

  • Export AutoCAD DXF

  • External: CAD Transform for Blender. Mine is named lcad_transform_0.93.2.beta.3. A good demo. Note that this addon hides over on the left in the menu activated with "t"; it is labeled as "CAD" with a green cube.

I also had a very elaborate start up procedure for doing video editing setup. See below for that.

I found that I prefer non-noodley noodles in the node editor. Here’s how to make sure that happens.

Edit → Preferences → Themes → Node Editor → Noodle Curving → 0
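Several of the set up tweaks above can be applied from Blender’s Python console instead of clicked through. This is a sketch, not gospel (property names are from the 2.8+ bpy API and meant to run inside Blender; verify each one in your version):

```python
# Sketch: apply a few of the preference/scene tweaks above in one go.
# Run inside Blender's Python console, then save preferences and the
# startup file to make them stick.
import bpy

prefs = bpy.context.preferences
prefs.filepaths.save_version = 0                  # no .blend1 backup files
prefs.edit.undo_steps = 128                       # up from the default 32
prefs.themes[0].node_editor.noodle_curving = 0    # non-noodley noodles

scene = bpy.context.scene
scene.frame_end = 240                             # not the stupid 250
scene.render.filepath = "/tmp/R/f"                # /tmp/R/f0001.png, etc.
scene.render.image_settings.color_mode = "RGBA"
```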

Disabling Emoji And Unicode Conflicts

Since Blender exhaustively uses every possible key combination, you can’t afford to have obscure bindings you never use lurking around in miscellaneous system interface features. My window manager is pretty good about never needing anything that doesn’t use the OS key, but I recently discovered that [C][S]-e seems to be bound to a new-fangled system feature of "ibus". (This is interesting because Gimp seems to know how to override it when using this combo for "Export".)

It looks like the way to cure this is to run ibus-setup and go to the "Emoji" tab. Click the "…" button; click the "Delete" button; click the "Ok" button.

It might be smart to get rid of the Unicode one too (which is [C][S]-u by default).

Maybe these could be rebound to the OS key if they’re needed some day. I’m not sure that’s even possible but it is what makes sense.

3-D Modeling

X is red, Y is green. Right hand coordinates, Z is up and blue.

Vertices and edges are fine (though weird that you can’t just spawn those without enabling the Extra Objects addon), but faces are kind of weird. There’s something kind of vague about them. They can have 3 edges (tris), 4 (quads), or a whole bunch. So what’s going on? This official Blender design document explains exactly how it all works.

Units

Here in the USA many things are measured in Freedom Units. Blender can play along. Go to the properties tabs and look for the "Scene Properties" tab which will be a little cone and sphere icon. The second item is "Units". Choose "Imperial" and "Inches" and things should be fine.
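For sanity checking: Blender stores lengths in meters internally no matter which display units you choose, so an Imperial display value is just a scaled meter value. A plain Python illustration (no bpy needed; the helper name is mine):

```python
# Blender's internal length unit is the meter; "Imperial" only changes display.
METERS_PER_INCH = 0.0254

def inches_to_internal(inches):
    """What Blender stores for a value you typed in inches."""
    return inches * METERS_PER_INCH

print(inches_to_internal(12))  # one foot, in meters
```

This is why importing an unscaled model often produces something comically huge or tiny: the file’s numbers get read as meters.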

Importing From CAD

Sometimes I need to get CAD models imported into Blender and that isn’t always easy. One important trick is running the models through FreeCAD. Try exporting to STEP files from the CAD program and having FreeCAD import that and export STL files. Blender can then import those pretty well.
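The FreeCAD leg of that pipeline can be done headless. This is a sketch from memory of FreeCAD’s scripting modules (run with freecadcmd; the module names, call signatures, and file names here should all be double checked against your FreeCAD version):

```python
# Sketch: STEP in, STL out, headless via `freecadcmd step2stl.py`.
# Uses FreeCAD-only modules; verify against your FreeCAD version.
import FreeCAD
import Import  # FreeCAD's STEP/IGES importer
import Mesh    # FreeCAD's mesh exporter

doc = FreeCAD.newDocument("conv")
Import.insert("part.step", doc.Name)  # load the STEP file into the document
Mesh.export(doc.Objects, "part.stl")  # tessellate and write STL for Blender
```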

Window Layout

Full official details are here and actually helpful.

The old way to split areas into subdivided areas: drag the upper right corner down (horizontal split) or left (vertical split). To combine areas, make sure they share a full edge (e.g. one big pane can’t join directly with 2 horizontally split panes) and drag the upper right corner up or to the right.

In 2.8+ things seem much easier to arrange. Just go to the edge of the viewport you’re interested in fixing and look for the left-right or up-down arrow icons. Once those are visible, you can use RMB to bring up a menu that will allow you to "Vertical/Horizontal Split" or "Join Areas". Also look at the "View → Area" top button menu for another easy way to split. This allows you to customize easily without the drama of old Blender.

A huge tip for people with multiple monitors is that you can detach windows. In old Blender this is done with the same drag of the upper right corner, just hold down Shift first. Which way you drag doesn’t matter—the window is just cloned with its own window manager decoration.

In 2.8+ there is a similar technique with [S] and dragging the plus to the left or something like that. You can also use the "Window" button on the top row menu to create a "New Window" — you may need to do a lot of shuffling after that though. Another way is to use the "View" button on the top row menu and then "Areas" (which is what these Blender function regions are called) and then "Duplicate Area into New Window". This will usually duplicate the wrong window but at least it only gives you one area to reset the way you want it.

Mouse Buttons

  • LMB

    • Select (in 2.8+). Click multiple times to cycle through any ambiguous selection.

    • [S]-LMB - expand selection explicitly.

    • [C]-LMB - expand selection inclusive or "Pick Shortest Path". E.g. LMB click one vertex and then [C]-LMB a few vertices over and all in between will be selected.

    • [A]-LMB - Select edge rings and face loops. The axis depends on which component is closest to mouse pointer when clicked.

    • [A]-LMB - (OM) Bring up a selection box for ambiguous selection. For example if there is an object inside of another object, holding [A] while LMB clicking in the general direction will bring up a menu of the possible objects.

    • [S][C]-LMB - expand selection inclusive but entire area on 2 axes. In Shader/Node editor (with Node Wrangler) this inserts a "Viewer" node into the graph to preview or diagnose what that step is producing.

  • MMB

    • (Middle Mouse Button) rotate (orbit) view. In draw mode, accepts drawn shapes.

    • [C]-MMB - scale view (zoom/move view camera closer)

    • [S]-MMB - pan view (translate), reposition view in display or, as I like to think of it, "shift" the view

    • [S][C]-MMB - Dolly, a kind of zoom that moves the view along your viewing axis

    • [A]-MMB - Center on mouse cursor. Drag to change among constrained ortho views.

  • RMB -

    • Vertex/edge/face context menu (EM). Object context menu (OM).

    • [C]-RMB - Select intermediate faces to mouse cursor automatically when certain EM geometry is selected. So for example, you can get A1 on a chessboard, hold [C] and right click on A8 and get the entire A column. Also, if you’re in some kind of edit mode where you’re extruding over and over again, [C]-RMB will extrude to the current mouse position. In Shader Editor with Node Wrangler, this brings out the edge cutting scissors.

    • In Shader Editor with Node Wrangler, this does a quick connect from one node to another to the implicit hook up point.

    • [C][A]-RMB - In draw mode, starts a lasso which will encircle items for deletion. In Shader Edit mode with Node Wrangler this does a semi-automatic hook-up presenting you with a choice dialog of the possible hook-up points; so another step, but finer control.

  • Scrollwheel - A mini 1-axis track ball on your mouse! Brilliant!

    • SWHEEL - Scales time line.

    • [C]-SWHEEL - Pans the time line.

    • [A]-SWHEEL - "Scrubs" through time line, i.e. repositions current frame.

Keyboard Shortcuts

There are 1000s. Here I will try to enumerate the ones I’ve encountered. Also remember that you can go to Preferences and select the "Keymap" section and learn a lot about what the currently configured bindings (and possibilities) are. Probably best to not go too wild with changing those unless you really know what you’re doing. Note that the tooltips often show the keybinding for the operation, and an extra interesting hint is that if you turn off tooltips in Preferences → Interface you can still get them at any time by holding Alt and hovering over the feature. F3 searching also shows the keybinding when available.

  • [Shift]

    • In draw mode, constrains lines and shapes to orthogonal axes or equal dimensions. Does some other mysterious smoothing thing with freehand draw mode; see [A] for constraining freehand drawing.

    • While performing transformations that can use a mouse input (g, r, s, etc), shift will slow down motion.

    • Holding shift will slow down how fast values are changing when modifying a value by moving a slider.

  • [Control]

    • Snapping. Hold [C] or use toggle snap property button. In draw mode, holding [C] causes drawing actions to erase per the erase tool’s settings.

    • Plus LMB in video editor will select all strips after your cursor. This also works for keyframes in timeline type interfaces.

  • [Alt]

    • When drawing in draw mode, constrains to orthogonal axes - a bit janky. Draw mode shapes become centered at the start point, even lines.

    • When making an ambiguous selection in object mode (perhaps others) it will cause a "Select Menu" to appear where you can choose which of the objects you were after. Useful for things like nested objects inside of another.

    • Loop selecting with LMB in edit mode.

    • Scrub timelines with MMB; need to double check what modes this works in.

  • F1

    • "Help" - Good to nerf this with a keyboard reassignment if a normal program’s idea of a normal web browser is not going to work for you (ahem).

    • [S]-F1 - File browser.

  • F2

    • Rename selected (OM). Only last (bright orange) if multiple selected.

    • [S]-F2 - Movie clip editor.

    • [C]-F2 - Batch renaming in the outline (search and replace). (Hmmm… Not working for some reason… Search for "Batch Rename" in F3. Or interestingly it looks like Blender is reading the keysym at a very deep low level; this means that you must use the genuine original Control key instead of a perfectly sensible remapped key. Thankfully this seems the only situation where this is a problem.)

  • F3

    • Open search form/menu.

    • [S]-F3 - Texture node editor. Press again for shader editor. Again for compositor.

  • F4

    • File menu including Preferences.

    • [S]-F4 - Python console.

  • F5-F8

    • In theory these are reserved to be defined by the user.

    • [S]-F5 - 3D viewport.

    • [S]-F6 - Graph editor. Press again for drivers.

    • [S]-F7 - Properties.

    • [S]-F8 - Video sequencer.

  • F9

    • Bring up modification menu for last operation. E.g. adjust bevel segments.

    • [S]-F9 - Outliner.

  • F10

    • [S]-F10 - Image editor. Press again for UV editor.

  • F11

    • View render.

    • [S]-F11 - Text editor.

    • [C]-F11 - View animation.

  • F12

    • Animate single frame.

    • [C]-F12 - Animate all frames of animation sequence. Note that this does not work if you use a remapped CapsLock as your Ctrl. There are a couple of bindings like this where the remapped control is not good enough for some weird reason and this is one of them. Use the native Ctrl key and it will work.

    • [S]-F12 - Dope sheet.

  • PgUp,PgDn = Page Up, Page Down

    • PgUp,PgDn - In NLA editor, moves tracks up or down.

    • [C]-PgUp - Change top view port configuration tab (Layout, Modeling, Sculpting, UV Editing, etc.)

    • [S]-PgUp - In NLA editor, moves tracks to top (or bottom).

  • "."

    • Pivot menu. Like the one next to the snap magnet icon. With something selected in the outliner, "." in the 3d editor will hunt for it in the model. With something selected in the model, "." from the outliner will find it in the outliner.

    • [C]-. - Transform origin, which means fully edit the origin point as if it were an object independently of its object’s geometry. Extremely powerful and useful!

  • ","

    • Orientation.

  • [`]

    • Ortho view pie menu removing the need for a numpad.

  • "~"

    • Pie menu for views; solves the lack of a numpad.

    • [S]-~ - Fly. (No idea what this means.)

  • [/] - Zooms nicely onto the selected object to focus attention on it. Press again to return to previous view. In Node Wrangler, inserts a reroute node (basically a bend or bifurcation in an edge).

  • [=]

    • [S]-= Organizes nodes when using Node Wrangler.

  • arrows

    • Used with "g" movements, moves a tiny increment (sort of one pixel).

    • <LRarrows> - In animation modes, go to next and previous frame.

    • [S]-<LRarrows> - In animation modes, go to first and last frame.

    • [S]-<UDarrows> - In animation modes, go to next and previous keyframe.

    • [C]-<LRarrows> - Move by word (Text Mode).

  • [space]

    • Start animation.

    • [S]-space - Menu.

    • [C]-space - Toggles current viewport to fill entire workspace (or go back to normal).

    • [C][S]-space - Start animation in reverse.

    • [C][A]-space - makes current viewport fill entire workspace with no menus.

  • [backspace]

    • Resets to default value when hovering over a form box field.

    • [C]-backspace - resets the single component of a property to default values (for example just the X coordinate rather than X,Y, and Z).

  • [tab]

    • Toggle object and edit mode. [C]-tab - full mode menu (pose, sculpt, etc). When splitting viewport windows, tab dynamically changes between vertical and horizontal splits in case you have second thoughts. In draw mode if you’ve just created two points for a line leaving the yellow dots, tab will go back to the edit mode to give you another chance at that endpoint. In dope sheet, locks current layer.

    • [C]-tab - Opens a pie menu for selecting from all of the modes (Edit, Object, Pose, etc.). Very useful for more complex things. Also opens the graph editor from the dope sheet.

  • [home]

    • Zoom extents.

    • Beginning of text (Text Mode).

    • [C]-home - Set start frame.

  • [end]

    • [C]-end - Set end frame.

    • End of text (Text Mode).

  • [0]

    • [C][A]-NP0 - Position camera to be looking at what your viewport is looking at. Wish I knew how to do this without a NumPad. Ah, how about n menu → View → View Lock → Lock Camera To View.

  • [1]

    • Applies to all number keys! In Object Mode the behavior of the number keys is essentially a bug. It’s fine for small projects where nothing is even noticed, but on a large project you’re likely to demolish your entire visibility setup in a way that can’t be restored with "undo". What it does is hide all collections but the nth collection (first for 1, second for 2, etc). This may seem minor and unimportant but if you have dozens of nested collections, the ordinal order of which number goes with which collection is ambiguous. My experiments show it counts only the top level collections first. Only after all the top level collections are put into a number will nested collections start being assigned numbers (a breadth first search, not a depth first search). Note also that it is easy to get unintuitive ordering: for example, if you rename your second collection C and your third collection B, 3 will preserve the visibility of B, which may not be ideal. Actually, how things are ordered in the Outliner is, to me and others, a complete mystery.

What is completely inexcusable for keys so easily mistyped is that there is no way to undo it. You must manually recreate the visibility layout you had established, perhaps painstakingly, throughout your project. Sometimes [A]-h (in the Outliner only!) can be beneficial, but since this unhides everything, you’ll need to go through and hide the things you really wanted hidden; this may be preferable to unhiding the things you want to see, and less likely to overlook a nested thing (you have to decide if you want a subtle small correct nested subcomponent missing from your render, or a stupid construction-only reference prop to sneak in).

The best way to handle this is to disable these bindings. Search for Preferences → Keymap → Name → "Hide Collection" and uncheck all of those.

If that seems like essential functionality would then be missing, just use the much easier to remember and more sensible [C]-h, which brings up a menu of collections to hide so you don’t even need to guess what goes with what. And normally it’s more sensible to use [S]-1 to just toggle that first collection’s visibility. After all, if you are interested in using the number keys for hiding collections, you can’t possibly have more than 10 anyway!

    • Vertex select mode (Edit Mode).

    • [S]-1 - Adds vertex select mode to any others active. Multiple modes are valid.

  • [2]

    • Edge select mode (Edit Mode).

    • [C]-2 - Subdivide stuff in some fancy automatic way. This makes a cubic based sphere when done ([C]-2) with the original default (or any) cube selected.

    • [S]-2 - Adds edge select mode to any others active. Multiple modes are valid.

  • [3]

    • Face select mode (Edit Mode). In Object mode something happens, not sure what, but stuff disappears. 2 seems to undo it.

    • [S]-3 - Adds face select mode to any others active. Multiple modes are valid.

  • [a]

    • Select all. When editing an object or mesh with snapping active (so possibly [C] being held too) pressing a when the orange snap target circle is present will "weight" that snap point; an example of how to use this is to press a when snapped to one edge endpoint, then again over the other endpoint, allowing you to snap to the midpoint.

    • aa - Deselect all.

    • [S]-a. Add object to selection.

    • [C]-a - Apply menu, to apply scale and transforms, etc (OM).

    • [A]-a. Select none. (Similar to "aa" or "a + [C]-i".)

  • [b]

    • Box select (EM). Box mask (SculptMode).

    • [S]-b - Zoom region, i.e. zoom view to a box.

    • [C]-b - Bevel edge. Bind camera to markers (timeline). When a camera is selected, sets render region border box; related to Output Properties → Dimensions → Render Region. In the time line if you have a marker selected, [C]-b attaches the currently selected camera to the marker so you can switch cameras in the animation.

    • [C][S]-b - Bevel a corner. When "Bool Tools" addon is enabled, you can select the helper object and then the one you’re serious about (in that order) and this will bring up the quick boolean operator menu.

    • [C][A]-b - Clear render region.

    • [A]-b - Clipping region, limit view to selected box (i.e. drag a selection box first). Repeat to clear. Good for clearing off some walls on a room so you can work on the interior of the room.

  • [c]

    • Circle (brush) select (EM). Clay (Sculpt Mode).

    • [S]-c - Center 3d cursor (like [S]-s,1) and view all (home). Crease (SculptMode).

    • [C]-c - Copy. Note that this can often be used to pick colors (and other properties) by hovering over a color patch and then pasting it elsewhere. [C][S]-c - In Pose Mode, brings up the bone constraint menu which can be handy when copying an IK rig to an FK rig etc; select the from, then shift select the to, then [C][S]-c and choose Copy Rotation (or transform maybe) and [C]-a in the constraint window to apply it permanently.

    • [A]-c - Toggle cyclic (EM of paths).

  • [d]

    • Hold down d while drawing with LMB to annotate using the annotation tool. Holding down d with RMB erases.

    • [S]-d - Assign a driver (Driver Editor).

    • [C]-d - Dynamic topology (Sculpt Mode).

    • [S][C]-d - Clear a driver (Driver Editor).

    • [S]-d - Duplicate - deep copy.

    • [A]-d - Duplicate - linked replication.

  • [e]

    • Extrude. [C]-RMB can extend extrusions once you get started. Continue polyline and curve segments in draw mode. Brings up the tracking pie menu in the Movie Clip Editor’s Tracking mode? Set end frame to current position while in timeline. Also in the dopesheet, when pressed while hovering the pointer between some key frames, this may allow you to move all of the selected ones on the pointer side of your current position; so, set your frame position cursor, select the keyframes to consider moving (perhaps [a]), put the pointer over the side you want to move, then press [e], then adjust the position of that side’s keyframes all at once (a demonstration). Note that [S]-t is similar but squishes them as you reposition.

    • [S]-e - Edge crease.

    • [S][C]-e - Interpolate sequence in Draw Mode - pretty useful actually since this needs to be done a lot and it’s buried deep in the menus (at the top).

    • [C]-e - Edge menu. Contains very useful things like "bridge edge loops". Graph editor easing mode.

    • [A]-e - Shows extrude menu. If you hover over the gradient of the "ColorRamp" node and press [A]-e, you’ll get an eyedropper which can be used to populate the ramp’s value; can be clicked multiple times for multiple colors.

  • [f]

    • Fill. Creates edges between selected vertices too, e.g. to close an open path. See j for join. In node editor "f" will automatically connect nodes, as many as selected. In a brush mode such as sculpt, weight, texture, etc., f changes brush size.

    • [C]-f - Face menu. Hmm. Or maybe a find menu???? Weight value, which is one of those things like brush size in weight painting mode.

    • [S]-f - Brush strength. Try WASD for controlling?

    • [A]-f - In Edit Mode for bones, switches the direction of the bone, the tail and tip exchange places.

  • [g]

    • Grab - Same idea as translate/move. I think of it as "Go (somewhere else)" or maybe "grab".

    • gg - Pressing gg (twice) edge slides along neighboring geometry.

    • [S]-g - Select similar — normals, area, material, layer (grease pencil) et al. — menu.

    • [C]-g - Vertex groups menu (EM). Create new collection dialog (OM) — though I can’t figure out how to actually do anything with this. In the Shader/Node editor, combines selected nodes into a node group.

    • [A]-g - Reset position of object. Remove bone movements in Pose Mode.

    • [C][A]-g - Shader/Node editor ungroup grouped nodes.

  • [h]

    • Hide selected. In draw mode, hide active layer. Disable strips in the VSE and NLA editors. In the node editors it will collapse (hide) the node to be as small as possible (see [ctl+h]).

    • [S]-h - Hide everything but selected. In draw mode, hide inactive layers.

    • [C]-h - Hooks menu. In a node editor with a node selected this will trim down the node so that it just shows relevant link connections (as opposed to the full "hide").

    • [A]-h - Reveal hidden. In draw mode, reveals hidden layers. In VSE and NLA, unhides tracks

  • [i]

    • Inset (EM) with faces selected. Press i twice for individual face insets. Insert keyframe (OM/Pose). Inflate (Sculpt Mode).

    • [C]-i - Invert selection.

    • [A]-i - Delete keyframe (OM/Pose).

  • [j]

    • Join (EM). With "fill" between two opposite vertices of a quad, you get the edge between them but the quad face hasn’t changed. Join will break that quad face up. The subdivide function can do the same thing - access at the top of the EM context menus (RMB). Mysteriously changes slots in the image editor during render preview.

    • [C]-j - (OM) Joins two objects into one. Also joins grease pencil strokes. In the Node Editor, puts nodes in frames.

    • [A]-j - Triangles to quads (inverse of [C]-t).

  • [k]

    • Knife tool. Hold [C] to snap (e.g. to mid points). Snake hook (SculptMode).

  • [l]

    • Select all vertices, edges, and faces that are "linked" to the geometry the mouse pointer is hovering over. Or edges the same way. Layer (Sculpt Mode).

    • [C]-l - Select linked geometry, i.e. everything connected. In Object Mode, it brings up the Link/Transfer Data menu; this allows you to send objects from one scene to another, and other similar things.

    • [S]-l - In Pose Mode, add the current pose to the current Pose Library (as some kind of one frame "action" or something).

    • [A]-l - In Pose Mode, browse poses in the current Pose Library.

    • [S][A]-l - In Pose Mode, brings up a menu of poses in the Pose Library so you can delete one, i.e. pretty safe.

    • [S][C]-l - In Pose Mode, rename the current pose. Why not F2? Don’t know.

  • [m]

    • Move to collection (OM). Move grease pencil points to another layer (EM). Add marker (timeline). Mute a node in node editor to disable its function temporarily.

    • (EM) Merge menu. Collapse vertices into one. Important for creating a single vertex. Can be done "By Distance" which will collapse vertices that are very (you specify) close. Formerly [A]-m (remove doubles/duplicates), now it seems just m in Edit mode brings up the merge menu. Using "By Distance" will mostly do what the old one did.

    • [S]-m - Where plain m moves to a collection, this is more like a copy to another collection: the original stays in the initial collection and shows up again in a different one.

    • [C]-m - Followed by the axis (e.g. "x" or "y", etc) will mirror the object immediately. No copy. Mirrors cameras too if you need reversed images. Rename marker (time line). Works on points in grease pencil edit mode.

    • [C][S]-m - Curve modifier menu (noise, limit clamping, etc) in graph editor. Quickly does selection mirror in edit mode; use the "extend" option in the F9 panel to add the other side, not just switch them over.

    • [A]-m - (EM) Split menu (by selection, faces by edges, faces&edges by vertices). Like Separate but keeps the geometry in the same object. Think of opening a box. Can also pull off bonus copies of edges if they’re selected alone. Clear box mask in sculpt mode.

  • [n]

    • Toggle "Properties Shelf" (right side) menus.

    • [S]-n - Recalculate normals. (If you need to see them, display normals with "Overlays" menu just over the Properties Shelf.) Recalculate handles in path EM.

    • [C]-n - New file.

    • [A]-n - Normals menu.

  • [o]

    • Proportional editing - note that the influence factor is controlled with the stupid mouse wheel.

  • [p]

    • Separate (EM), e.g. bones. Pinch (SculptMode). Set preview range (dope sheet, timeline).

    • [S]-p - In Shader/Node editor puts selected nodes in a frame.

    • [C]-p - Parent menu.

    • [A]-p - Clear preview range (dope sheet, timeline).

    • [C][A]-p - Auto-set preview range (dope sheet, timeline).

  • [q]

    • Quick favorites custom menu. Use RMB on menu actions to add them to the quick favorites menu.

    • [C]-q - Quit.

    • [C][A]-q - Toggle quad view. This means 4 viewports showing different ortho sides. Note that in quad view, for some reason Measureit annotations are invisible.

  • [r]

    • Rotate.

    • rr - Pressing r twice goes to "trackball" mode.

    • [S]-r - Repeat last operation.

    • [C]-r - Loop cuts. Note that number of cuts is controlled with PgUp and PgDn. You can do partial cuts by hiding the faces (h) that form the boundary.

    • [A]-r - Reset rotation value of an object. Remove bone rotations in Pose Mode.

  • [s]

    • Scale. Set start frame to current position while in timeline.

    • [S]-s - Snap/cursor menu. Smooth (SculptMode). Save As from image editor.

    • [C]-s - Save file.

    • [S][A]-s - (EM) To Sphere: turns the selection into a sphere! (Does it need to be manifold?)

    • [S][C][A]-s - Shear.

    • [A]-s - Resets scale? In Edit Mode it subjects selected geometry to the Shrink/Fatten tool which does pretty much what that sounds like. Remove bone scale changes in Pose Mode. In image editor saves the image. In movie tracking unhides the search area. In grease pencil edit mode, changes thickness of strokes where the points are selected. In path editing with handle selected, changes extrude width. In the Node/Shader Editor this swaps the input points of the current node.

  • [t]

    • Toggle main "3D View" tool shelf (left side) menus. Interpolation mode (dope sheet, graph editor).

    • [C]-t - Triangulate faces (inverse of [A]-j). In path editing with handle selected, changes tilt/twist of the curve. In the VSE, [C]-t changes the display from minutes:seconds+frames (02:15+00) to simple frames, matching every other default time display.

    • [S]-t - Flatten (Sculpt Mode). When lights are selected, they will follow the mouse cursor; works for multiple, select all lights by type to get all lights. See [e] in the dopesheet mode to see how this can be used to stretch/compress keyframe locations.

  • [u]

    • UV mapping menu (EM). In draw mode it brings up the Change Active Material menu. In grease pencil edit mode, turns on bezier curve editing.

  • [v]

    • Rip Vertices (EM?). In grease pencil edit mode, makes selected points their own segment, detaching from the original; probably that’s the general functionality here too. Graph editor handle type. In the Node editor the mouse buttons affect the nodes, but if you want to scale the background preview image, try v and [A]-v.

    • [S]-v - Slide vertices (EM?). This constrains movement along existing geometry.

    • [C]-v - Paste. Or maybe Vertex menu depending on context.

    • [A]-v - Rip Vertices and fill (EM?). In node editor scales (out?) background preview image.

    • [S][C]-v - Paste but with inverted/mirrored sense somehow. (A good example. Another good one at 17:40 too.) I’ve seen examples of this working in setting keyframes with a mirrored pose.

  • [w]

    • Change selection mode (box, brush (circle), freeform border). In draw mode it is rumored to bring up the context menu. I’ve had problems with this not working, very similar to what is described here; using RMB for the context menu works just fine.

    • [S]-w - Reconstruction pie menu in Movie Clip Editor’s Tracking mode? Use with the bend tool to do the bending.

    • [C]-w - Edit face set (SculptMode).

  • [x]

    • Delete. Draw tool in sculpt mode. Swap colors (c.v. Gimp) in Image Editor and Texture Paint.

    • In operations, constrain to x axis or with shift constrain other two.

  • [y]

    • In operations, constrain to y axis or with shift constrain other two. In draw mode, brings up Change Active Layer menu, which includes a New Layer option.

  • [z]

    • z - View mode.

    • [C]-z - Undo.

    • [S][C]-z - Redo.

    • [A]-z - Toggle X-ray (solid, but transparent) mode.

    • [S][A]-z - Toggle display of all overlay helpers (grid, axes, 3d cursor, etc).

    • In operations, constrain to z axis or with shift constrain other two.

This video has a lot of tips about shortcuts for the node editor.

Numpad Number Keys On Numeric Keypad

Since you have all those stupid useless keys sitting there that you never use, you might as well use them, right? Thought the Blender devs. Well I had a similar thought which was to get a less idiotic keyboard that didn’t have all that extraneous cruft. But Blender is really keen on being the one piece of software that justifies stupid keyboards. But there are tenuous workarounds.

The numpad numbers tend to change the view.

  • np1 - Front

  • np2 - Down

  • np3 - Side

  • np4 - Left

  • np5 - Perspective/Orthographic

  • np6 - Right

  • np7 - Top

  • np8 - Up

  • np9 - Opposite

  • np0 - Camera

  • np/ - Isolate selected by zooming to it and hiding everything else.

  • np+ - Zoom in. [C]-np+ in grease pencil edit mode selects more points near the ones selected.

  • np- - Zoom out. [C]-np- in grease pencil edit mode selects fewer points near the ones selected.

Strange And Uncertain Features

I’m still trying to sort out these features but am noting them here so they don’t get completely forgotten.

  • [Space] - Brings up the "search for it" menu. Just type the thing you want and that option is found often with its proper key binding shown. Looks like this has all changed a lot with 2.8+.

  • [C]-n - Reload Start-Up File (object mode) OR make normals consistent (edit mode)

  • [C]-b - draw a box which, when switched to "render" mode will render just a subsection

  • [C]-LMB - In edit mode, extrudes a new vertex to the position of the mouse target. Can be used like repeatedly extruding but without the dragging.

  • [C]-MMB - Amazingly, this can scale menus. For example to make them more readable or make more of it fit.

  • [A]-C - Convert. This converts a fancy object like a metaball, a path, or text to a mesh. Or convert the other way, from mesh to curve.

Edit Mode Keys

  • p - Separate geometry from the same mesh into multiple objects. The "loose parts" option makes each air gapped structure its own object.

Object Mode Keys

  • L - Make local menu.

  • [F6] - Edit object properties. Useful for changing the number of segments of round objects when you don’t have a mouse wheel.

Here is a necessarily bewildering key map reference from blender.org.

Note that if something like [A]-LMB tries to move the entire application window because that is how the Window manager is set up, it’s worth the effort to change that over to the Not-Exactly-Super key. For me on Mate, I go to System → Preferences → Look and Feel → Windows → Behaviour → Movement Key. Fixing that helps a lot with Blender.

Node Editing Key Bindings + Node Wrangler

The second most common thing said in Blender videos (after "Apply transforms") is "enable Node Wrangler". Unfortunately this bumps up the complexity of key bindings by quite a bit. I have found the official documentation for Node Wrangler to be lacking (for example, this page doesn’t mention "m" to mute nodes). Part of the problem is that some of the bindings are being handled by Node Wrangler and some are just part of the Blender interface without the addon. Since I don’t care whose turf war is responsible for the functionality, I’ll just try to note the useful bindings I use when editing nodes.

  • [h] - Hide node. Toggles Collapse of the node’s compact form.

  • [m] - Mute node. Temporarily disables the node.

  • [s] - Scales selected nodes. The useful part of this is sx0 (or sy0) followed by [enter] to align.

  • [/] - Insert a reroute point. See [Ctrl][x].

  • [Ctrl][x] - Remove node preserving through connections. Good for getting rid of a reroute node you accidentally inserted.

  • [\] - Link active to selected.

  • [backspace] - Reset node to default settings.

  • [Alt][x] - Delete unused or muted nodes.

  • [Shift][p] - Put a frame around the selected nodes. Note that you can label the frame with [F2].

  • [Shift][s] - Substitutes node with a different one. (Oops. This is now deprecated. Look for a native version soon.)

  • [Shift][Ctrl] LMB - Create a temporary short circuit to the final output for previewing operations.

  • [Ctrl][Shift][t] - With a (Principled BSDF, maybe others) shader selected, this will do a full "texture setup"; this will allow you to go select texture maps and it will put them all in the right boxes (an example).

Views

  • / - Toggle global/local view

  • [Home] - View all

  • [S]-F - Fly mode with WASD controls (also Q & E for up/down, Escape to exit)

  • . - View selected (Maybe only numpad)

  • 5 - Toggle Orthographic/Perspective

  • 1 - Front

  • [C]-1 - Back

  • 3 - Right

  • [C]-3 - Left

  • 7 - Top

  • [C]-7 - Bottom

  • 0 - Camera

  • [A]-h - Show hidden

  • H - Hide selected (also note [C]-LMB on the eye in the Outliner hierarchy)

  • [S]-H - Hide unselected

  • [A]-M - Merge - Makes two vertices the same. Or at least in the same place.

Note that the number keys meant here are the numpad number keys; if the main keyboard number keys change layers instead, you may need to set "Emulate Numpad" in File → User Preferences → Input tab.

In Blender 2.8 some new useful interface features appear. Now you can go ahead and use the limited number keys (assuming you don’t have a stupid number pad) to select vertex, edge, or face editing. So how do you get quick access to viewports? Use the "`" backtick key to bring up a new style wheel menu from which you can use the mouse or numbers to select the view you want.

Simple Coloring

Often when modeling I would like something more helpful than everything showing up default gray. This can be done by going to the little down arrow box to the right of the "shading" menu which is located above the "n" menu on the upper right. Then you can choose "Random" and the objects will be colored in random different colors instead of default gray. That’s often enough for my simple needs.

Clipping Seems Stuck

Sometimes you’re looking at a model and it disappears and it becomes very difficult to find. This can happen when the clipping planes are set such that the whole model is cut out. To fix this, bring up the tool menu with "n" and then look for the "View" section and adjust the "Clip" values. Setting "Start" to 0 is a good way to try to get things back to visible.

Overlay Stuck Turned Off

Sometimes you want to see a grid and axes and all that good stuff. This one is related to the clipping problem because that may act up too. Maybe you only see the 3d cursor. You go to the overlays pull down and it all looks like it should be on. This tends to happen when I import something from someone else, especially if there was a conversion from some other kind of modeler. What may be going on is that the scale of the model is truly enormous. The solution can be as simple as changing the model’s units to something sane and scaling it to fit. Maybe press Home if you lose it when it shrinks back down. It can be quite mysterious to import, say, a flower that turns out to be the size of a 10 story building: the overlay is actually on and fine, just smaller than an aphid relative to the model and practically invisible in the interface.

Scaling/Panning Seems Stuck

Sometimes it seems like you can’t zoom or pan the view. The trick here is to get in the 3d-Editor window and go to View → Frame Selected (formerly View Selected). This has a shortcut of . on the numpad (if you have one). That’s super frustrating so this tip can be very important.

Another solution that may be easier is to be in object mode (perhaps by pressing tab) and then press the "Home" key. This resets the view stuff. It would be nice to figure out what’s really going on there but persistent confusion may exist.

Background Images For Reference

A very common workflow technique is to freehand sculpt 3d assets on top of (or in front of, etc) a 2d reference image. These images don’t hang around for final rendering and are not part of the model per se. They are just available as helper guides to put things in roughly the right place so they look good with respect to reference material.

New 2.8 Way

The new system in 2.8+ is set up for a special kind of "empty" to contain an image. These can be created by inserting [S]-a "Image". This gives you a choice of "Reference" or "Background". It seems that background images will show other objects in front of them while reference images may still be persistently visible. This has some subtleties explained here.

Here are the three kinds of image objects.

  • Reference - Can be at any orientation. It is basically an empty with a scaled image instead of axes or arrow or some other marker. Like all empties, it doesn’t render.

  • Background - Very few and subtle differences from the reference image empty. The main one I found is that it won’t render the back side of the rectangle while a reference empty will. Also, solid mesh objects will be shown over the background image even if that empty is closer to the camera or viewer — this puts the image always in the background. A -1 scale will flip the images.

  • Images As Planes - Can do anything a plane can do, but with your image slapped on it as a texture automatically. This means that this plane with its image is renderable where the other types are not visible at render.

In the Empty’s settings you can specify a check box if you want it to be visible in Perspective, Orthographic, both, or neither. Orthographic can be helpful if you have a front photo, a top photo, and a side photo and want the others to go away when you switch views.

If two empty images share the same distance from the viewer and are superimposed, the one that is rendered is the one closest overall (in perspective). You can see this by putting two images in the same place and g sliding one up half way. Then change the view position to look from high, then low and the order will change.

Background HDRI Images

Sometimes you try to render something shiny (or anything actually) and it looks very computery because there’s nothing normal out in the wider world reflecting naturally off the object. This is especially problematic with isolated objects which look pretty unnatural just floating in space. To help cure this, you need to tell Blender’s "world" about what sort of ambient background is out there.

A decent place to start looking for environment textures is this site (aka this URL).

You could do worse than to start at Poly Haven and go to their HDRI section. Pick one that is somewhat like the mood of the scene you’re going for and download it.

Note that this file can be a .exr or .hdr file. Best to stick to OpenEXR which was created by ILM and seems wholesome. Blender created the OpenEXR Multilayer format, an extension that is slightly less universally supported and probably not necessary if you don’t know you need it. All of these seem compressed but lossless.

To get Blender to start using this HDRI as a background: Go to the World Properties tab. Click Color and under Texture choose Environment Texture. This will then provide a file box where you can specify the path to your file.

If you want to adjust how your image is sitting on the background open up the Shader Node Editor. Choose the World pull down (next to the editor type icon). Then add two nodes: a texture coordinate node (from Input section) and a mapping node (from Vector section). Connect the Texture Coordinate’s Generated output into the vector input of the mapping Node. Connect the mapping node’s vector output to the hdr image node’s vector input. By playing with things in the mapping node, especially the Z rotation, you can get what you need. This process is explained in this video.

HDRI stands for High Dynamic Range Image and is often made of images taken with a camera at multiple exposure settings so it can provide more information about what the scene is doing in different conditions (no regions washed out or too dark). Here are some good examples.

This kind of image can be assembled with Gimp — here’s a decent beginner guide. And a more in depth guide.

It should be possible to make the dynamic range "high" with Gimp. To make stitched panoramas, you can look into Hugin. Compiling it is no fun but it seems to be available with apt install hugin.

In a pinch, there are some low res environment images that come with Blender that are used in the Material Preview mode. These can be found in your installation at /${MYPATH}/blender-3.5.0/3.5/datafiles/studiolights/world

Old Way - Pre 2.8

Make sure the "Properties Shelf" is on with "n". Look for "Background Images" down near the bottom — you might have to scroll. Click "Add Image". Select your image from the file system. The rest is pretty self-explanatory once you find it. If nothing shows up, you may not be aligned to the correct view. Pressing 7 will show a reference image set to display on "top" views.

If that still doesn’t work, you are probably in "Perspective" mode even though you pressed 7 and it sure doesn’t look like it. Double check that it does not say "Top Persp" in the top left corner of the modeling window — it should say "Top Ortho". To toggle, make sure the mouse is hovering over the modeling window and press 5.

Remember that the control settings can be specific to a window. For example, you may have a top down view that is blocked by your model. You set the view property to "front" in another window and nothing happens, because each window only honors the properties set in it. To have the window showing your top view take those properties, you have to go to that window, press "n" to open its properties, and set it there.

Origin And 3D-Cursor

I find the distinction here can be tricky to get used to.

The origin is the tri-colored unit vectors with a white circle around it. The 3d-Cursor is a red/white circle crosshairs. Position the 3D-Cursor by LMB; note that it should stick to the things (e.g. faces) sensibly. Note that this is less obvious in wireframe mode. When in "Object Mode" in the tool shelf, there can sometimes be an "Edit" submenu; in that can be found a "Set Origin" pull down. This includes "Geometry to Origin" and "Origin to Geometry". Also "Origin to 3D Cursor" and "Origin to Center of Mass".

  • [S][C][A]-c - Bring up menu for origin management (pre 2.8 - see below)

  • . - Move origin to 3D-Cursor. In 2.8+ this sets the pivot point in a handy way.

  • [S]-C - center view on something pleasant and move the 3-d cursor to origin

  • [S]-S - Open snap menu, handy for putting the 3d cursor to selected or grid, etc. One of the best techniques for positioning the cursor is to use [S]-S and then "Cursor To Selected" which will put it perfectly in the middle of the face.

In 2.8+ to reposition the origin of your objects, select the object in object mode, click the "Object" button on the bottom menu, choose "Set Origin", and pick the thing you need such as "Origin to Geometry". Or in Object Context Menu (RMB in OM) look for "Set Origin" - also easy.

3d Cursor

In old Blender (pre 2.8), when you clicked with the most natural LMB action, the 3d cursor was placed. Although this is only true now if you’re in the cursor tool mode ([S]-space space), clearly this action is thought to be important. What the hell is it good for?

  • It is where new objects will show up.

  • The "origin" can be moved to it.

  • It can be where things rotate if you set — pivot point (button right of display mode button) → 3d Cursor

  • Optionally where the view rotates — "n" menu → "View" Section → "Lock To Cursor" checked — This is worth doing!

To set the 3d cursor’s location more specifically than random LMB madness, there are a couple of different options.

  • If you want a rough position, the LMB will work, but in v2.8+ make sure you’re in the Cursor mode and not, say, the Select Box mode.

  • In the "n" menu under the "View" tab, the "3D Cursor" section has a form box for explicit entry of X,Y,Z location.

  • Often you want the cursor placed with respect to some geometry. [S]-s brings up the cursor placement wheel menu. Doing [S]-s and then 2 will put the cursor on the selected geometry. Note that by doing this and checking the "n" menu’s coordinates (see previous item) you can take measurements and find out locations. Another example to clarify this useful use case is if you want the cursor at the "endpoint" of some other edge in the model.

  • Note that when you get the cursor on some existing geometry, you can go to the View→3d Cursor section of the "n" menu and put math operations in the location box. So if you want to move the cursor up from a known endpoint you can [S]-s,2 select the endpoint, then go to the Z box for 3d cursor location and put "+2.5" after whatever is there to move the cursor up 2.5 units.
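Under the hood, "Cursor To Selected" (in the default Median snap mode) places the cursor at the median of the selected vertices. A minimal plain-Python sketch of that median calculation, with hypothetical coordinates standing in for what Blender would read from the mesh:

```python
# Median of a set of selected vertex coordinates -- this is where
# "Snap Cursor To Selected" puts the 3D cursor (Median snap mode).
def selection_median(verts):
    n = len(verts)
    return tuple(sum(v[axis] for v in verts) / n for axis in range(3))

# Four corners of a unit square face in the XY plane (hypothetical data):
face = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(selection_median(face))  # the face center: (0.5, 0.5, 0.0)
```

This is why snapping the cursor to a selected face lands it perfectly in the middle of that face.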

Some technical details:

C.scene.cursor.location             # Query (pre-2.8 this was C.scene.cursor_location)
C.scene.cursor.location = (0,0,1)   # Set
C.scene.cursor.location.x += .1     # Increment X in a controlled way

To see where this comes in handy, see measuring distances.

Measuring Distances

What if you simply want to know the distance between two points in your model? Not so easy at all with Blender! (Well, at least according to this very bad answer to the question.)

It looks like in 2.8, there is now a prominent feature to measure stuff! Yay! How this got overlooked until now is a complete mystery.

To use it check out [S]-space + m. (This was formerly [A]-space I believe.) LMB hold on the first point to measure. Then drag to the second point. If you mess it up, you can pick up the ends and reposition them. You can also use [C] to constrain the endpoints of the measurement to snap points.

If these little measurement lines hang around as dashed lines, it can be tricky to delete them. Click on the ends of the unwanted phantom measurement ruler and then del or x.

MeasureIt Tools

There is an included add-on that is pretty nice which creates dimension line geometry. With the new native "Measure" tools, this add-on (and other similar ones) become a lot less important. However, if you need to actively communicate dimensions as in a shop print, this add-on is still excellent.

Generally to use it, you simply select two vertices or 1 edge in Edit Mode and click the "Segment" button. Sometimes tweaking the position is required. See troubleshooting below.

Troubleshooting MeasureIt

Don’t see MeasureIt in the "View" tab of the "n" menu? Maybe the addon is not enabled. Look at Edit → Preferences → Add-ons and check the box for "3D View: MeasureIt".

An annoying flaw that has frustrated me is that the addon’s visibility is off by default!! Note the "Show" button; if you click that it will make the measurements from the addon visible. Clicking it again will "Hide" the measurements. It’s frustrating because you’d assume the default would be to show the stuff if you’ve bothered to use it, but that’s not how it works.

Note that for some annoying reason MeasureIt annotations are invisible in quad view! This is problematic because if you’re doing something technical that could make use of MeasureIt, you’re also more likely to be using quad view. As a reminder for how to turn it off, try [C][A]-q to toggle and ` for individual view selection.

You can see the dimension lines but they are all crazy and not at all properly automatically lined up with the mesh or axes or anything discernible. If you expand the settings (the little gear icon) on one of the measurements in the "Items" list, it will have a checkbox for "Automatic Position". Obviously you want automatic positioning, so turn OFF automatic position! I found that to cure alignment issues. Maybe then you’ll need to play around with "Change Orientation In __ Axis" to really put the aligned dimension where it should be.

If you’re trying to make a final render with the MeasureIt dimension lines included, you will be disappointed. The way it works is that the measurement lines are created in their own separate transparent image. One thing to keep in mind is to export RGB*A* (not RGB), especially if you’re just looking at the measureit_output in the Image Editor (check the "Browse Image" thingy to the left of the image name to see if there’s a second image from MeasureIt hiding in there).

Importing the scene render’s final image and the MeasureIt overlay image into layers in Gimp, you can combine them for what you’re after. Or use ImageMagick:

composite measureit_output.png coolpart.png coolpart_w_dims.png

Keeping the undimensioned one can maybe be used nicely to toggle the dimensions on and off with JavaScript.

DIY Python Measurements

My technique is as follows.

  • Select the object of the first point.

  • Tab to be in edit mode.

  • "a" until selection is clear.

  • Make sure vertex mode is on.

  • Select the first point.

  • [S]-s to do a "Cursor to Selected".

  • In a Python console type this: p1= C.scene.cursor_location.copy()

  • Select the second point in a similar way.

  • [S]-s to do a "Cursor to Selected" again.

  • In a Python console type this: p2= C.scene.cursor_location.copy()

  • Then: print(p2-p1)
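The final subtraction gives a vector; its length is the distance. The same arithmetic works outside Blender in plain Python (hypothetical points standing in for the two cursor snapshots; inside Blender, mathutils vectors give you `(p2-p1).length` directly):

```python
import math

# Two cursor positions captured with copy(), as in the steps above
# (hypothetical values):
p1 = (1.0, 2.0, 0.0)
p2 = (4.0, 6.0, 0.0)

delta = tuple(b - a for a, b in zip(p1, p2))     # what p2 - p1 returns
distance = math.sqrt(sum(d * d for d in delta))  # Euclidean length
print(distance)  # 5.0
```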

Also in Edit mode, under "Mesh Display" there is checkbox for showing edge "Length" info. But that has its limitations too if the points are on different objects. Note that this can show comically incorrect values! The problem is (may be) that an object was scaled in object mode and the transformation was not "applied". Try [C]-a to apply transforms. I was able to go to object mode, select all with a, and then [C]-a and then apply rotation and scale. Details about this problem.
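The comically wrong lengths make sense once you see the arithmetic: the edge length display uses the mesh's local coordinates, while an unapplied Object Mode scale multiplies everything in world space. A hedged plain-Python illustration with hypothetical numbers:

```python
# An edge 1.0 units long in the mesh's local coordinates:
local_length = 1.0

# The object was scaled by 2.5 in Object Mode, and Ctrl-a
# "Apply Scale" was never done:
object_scale = 2.5

# What the edge really measures in world space:
world_length = local_length * object_scale

# After "Apply Scale", the local coordinates are rewritten so that
# local_length becomes 2.5 and object_scale becomes 1.0 -- the same
# world result, but now the displayed edge length matches reality.
print(world_length)  # 2.5
```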

Volume Measurements

Sometimes you want to design something complicated and know how much concrete or 3d printing pixie dust the thing will require. Blender has an extension called Mesh:3D Printing Toolbox which can do this. Just go to user preferences, "Add-ons" and check it. Then you’ll get a tab for that. You can then click volume or area for a selected thing and go to the bottom of that panel to see the results.

Align

Most people align things by just eyeballing it; this is wrong. But Blender doesn’t always make it easy to do perfectly. And perfectly is the correct way!

Align Objects

The object alignment is pretty easy. There is a whole Align addon that ships with Blender. It can be useful and I think mostly obvious how to use it. But you can also just copy transform coordinates if you only have a couple.

Align Mesh

Let’s say you get a mesh from some random source and it has had its transforms applied but they’re not quite right. Maybe you have a ship scene where the ship is rocking in the rough sea but you want to model new things parallel to the deck — how do you make this model’s mesh change so that all the deck points are in the same world Z? You can’t even use "Local" coordinates because they’re parallel with the ocean and not the deck.

The trick is using the Align View tool to get a foothold. Here are the steps.

  • Go to Edit mode and in face select mode, select a face that should be in one of the major planes, i.e. not misaligned.

  • Use the View → Align View → Align View to Active → Top menu item to get a view that’s looking straight down on this misaligned face.

  • [Shift+s,2] to put the cursor on that face.

  • Tab back to object mode and [Shift+a,m,p] to add a plane.

  • The plane will still be out of alignment! But if you go to the F9 "Add Plane" post hoc menu you’ll find an Align pull down. Set that to View. Now this plane should be aligned with the geometry on your original object that you want to align.

  • There is probably some way to copy transforms directly (Align Tools addon?) but I won’t remember that as easily as simply parenting the target object to the alignment plane (select them in that order and [Ctrl+p]).

  • Then select only the plane and zero out its rotation angles. The target object should come along and now be aligned.

  • You must unparent ([Alt+p]) the target object (keeping transforms of course) before deleting the helper plane.

The target object should be properly aligned now!

Layers

Those little 2x5 grids in the menu bars are layer slots. To change the layer of an object, select it and press "m". This brings up a layer grid to select where you want it. To view multiple layers you can click on the boxes in the grid using shift for multiple layers.

Grid

The basic grid seems like the kind of thing that should be controlled in user preferences (like Inkscape) but it is not. Turn on the "n" menu and look for the "Display" section. There is a selection for which axis you want the "Grid Floor" to be (usually Z). Then adjust the scale and size.

Objects

Objects can be dragged around and placed in different hierarchical arrangements in the "Outliner". I’ve had this sometimes get stuck and it’s pretty strange, but reloading can cure it.

Having a good object hierarchy can make operations easier since it allows finer control of hiding or excluding from rendering.

To split geometry out of an existing mesh into its own object, use [S]-d to "duplicate" the selection and then hit the "p" key, which brings up the "separate" menu allowing the "Selection" (or "All Loose Parts") to be made into their own objects.

The complementary operation is to join objects. For example, if you input some reference lines on a boat hull and each is its own object, you can’t put faces between them. You must join them into the same object first. In object mode, select both of the objects and press [C]-j and they will be part of the same mesh.

Parenting

Parenting is a very important concept that allows objects to be organized. I used it less than I should have for a long time because the metaphor was confusing to me — children are not normally the first to appear in a family followed by the parent. But in Blender, selecting objects to make a parent child relationship, that is exactly how it is done.

For me a much better metaphor, one that conveniently starts with the letter p and makes the selection order and one-to-many aspects seem natural, is to mentally replace the Blender verb "parent" with "piggyback". Or "piggyback on to". As an added bonus, "piggyback" implies a physical connection that is usually relevant in so-called parented objects. While it’s true that multiple passengers don’t usually ride the piggy at the same time, it’s totally plausible if the piggy is strong enough. (Certainly more plausible than a child having only one biological parent - ahem.) This dictionary definition of the verb piggyback pretty much exactly sums up what Blender parenting is: "to set up or cause to function in conjunction with something larger, more important, or already in existence or operation".

Each object can be thought of as having one single slot for a parent. Just like if you move a file into a directory, you make that directory a parent of the file. There is an analogy to mv fileA fileB fileC parentdir since the last item in the arguments becomes the parent for all the other items. Of course the better analogy would involve a directory and subdirectories since all items can themselves be a parent. To get the terminology straight, imagine that files/subdirs are parented to the directories that contain them.

It is possible to assign a single parent object (the "carrier") to multiple child objects (the "riders"). As long as the parent object is selected last (called the "active object", glowing the brightest orange) the parent operation will parent each of the other selected objects to it.

One of the options in the [ctl+p] parenting menu is "Object (Keep Transform)". This is subtle and optional for most operations. The basic idea is that if an object has been transformed because of its parent, when you parent it to a different object the question is whether it reverts to its own basic object properties first, or whether it gets a kind of "apply" that bakes in the previous parent’s influence before following the new parent. An example would be two characters handing each other an object. The object would be parented to A’s hand and, at the moment of hand-off, you want it parented to B’s hand. What you probably don’t want is for the object, when parented to B, to return to where A’s hand was at the start of the scene where the object was originally modeled (with its transforms applied) and originally parented to A. That’s where it would go if it were no longer under A’s influence, but with Keep Transform you can accept its new position/rotation/scale as the basis for this child object.
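
The Keep Transform idea is easier to see with numbers. Here is a plain Python sketch (1-D positions only, ignoring rotation and scale; this is conceptual, not Blender's actual API):

```python
# Conceptual sketch of "Object (Keep Transform)" using 1-D positions.
# world = parent_world + local (real Blender uses 4x4 matrices).

def reparent_plain(child_local, old_parent, new_parent):
    # Plain re-parenting: the local offset is kept, so the
    # child's world position jumps to follow the new parent.
    return new_parent + child_local

def reparent_keep_transform(child_local, old_parent, new_parent):
    # Keep Transform: the child's *world* position is preserved,
    # so a new local offset is computed against the new parent.
    world = old_parent + child_local
    new_local = world - new_parent
    return new_parent + new_local  # == world, unchanged

a_hand, b_hand = 2.0, 10.0   # parent positions (made-up numbers)
prop_local = 3.0             # prop modeled 3 units from A's hand

print(reparent_plain(prop_local, a_hand, b_hand))           # 13.0: prop jumps
print(reparent_keep_transform(prop_local, a_hand, b_hand))  # 5.0: prop stays put
```

The second function is the hand-off you usually want: the prop stays where it was in the world and only its bookkeeping changes.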

If you want to temporarily suspend parenting effects, with the parented object selected you can go to the "Active Tool and Workspace settings" tab (which looks like a screwdriver and a wrench) and choose "Options"; then one of the "Transform" options will be to "Affect Only… Parents". Check that and the parent can be adjusted with the children staying fixed.

For bones, the general strategy is that distal bones piggyback on the proximal bones by default assuming the bones were extruded distally. For example, the tib-fib (shin bone) piggybacks on or is parented to the femur (thigh bone). There the femur is the parent, selected last during the parenting operation. The idea of "bone parenting" as a specific parent mode is when you want some other object (e.g. a robot’s form/shape mesh or an animal’s skin) to be piggybacked on a specific bone as opposed to the whole armature.

Simple Things Not So Simple

Simple Points And Lines

With Blender it is strangely easier to model a cow than a simple Euclidean line. Seriously, just getting a simple line is strangely challenging. I’m not the only one who ran into this (here and here).

As far as I can tell, you must create a complicated entity (like a plane) and remove vertices until it is as simple as you like. There may be a better way to get started, but I don’t know it.

Use [A]-m to merge vertices to a single point!

Once you have chopped something down to a line (or even a single vertex), you can extend that line to new segments by Ctrl-LMB; this will extend your line sequence to where the live pointer is (not the origin thing).

Another similar way is to select the point to "extend" and press e (for "extrude"); then you can drag out a new line segment. This is less precise in some ways because the new point is not right under the mouse position. However, this technique can be very helpful: press e and then something like x then + then 3, which extends the line sequence in the positive X direction by 3 units. Omitting the sign can result in absolute coordinates. Press Enter when you’re done.

It looks like there is now an add-on that addresses this nonsense. Go to Preferences → Add-ons and search for Add Mesh: Extra Objects. This will give you many things like Mesh → Single Vert → Add Single Vert. Also Curve → Line (though it may be better to extrude that "Single Vert"). There are a lot more too. Worth activating!

Trim Mesh Vertices

AutoCAD had a command trim and another called extend and they were incredibly useful. Blender is weirdly deficient here. However, if you’re experienced enough, there is a shrewd idiomatic Blender way. Basically you need to scale the vertices in Edit Mode to zero. Mostly this is sensible when your trimming is aligned to the coordinate system and then you can select an axis at a time. For example, to make a bunch of points with random heights all have the same Z value (e.g. flatten a mountain) just select them and [s,z,0].
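
The scale-to-zero trick is just multiplying each selected coordinate's offset from the pivot by zero. A quick sketch of the math (plain Python, not Blender's API):

```python
# Flattening vertices by scaling to zero along one axis,
# relative to a pivot (Blender's [s, z, 0] with some pivot point).

def scale_axis(verts, axis, factor, pivot):
    # Each coordinate moves toward the pivot by the scale factor;
    # factor 0 collapses that axis onto the pivot entirely.
    return [tuple(p + (c - p) * (factor if i == axis else 1.0)
                  for i, (c, p) in enumerate(zip(v, pivot)))
            for v in verts]

mountain = [(0.0, 0.0, 3.0), (1.0, 2.0, 7.0), (2.0, 1.0, 5.0)]
pivot = (0.0, 0.0, 5.0)  # e.g. the 3d cursor at the target height
flat = scale_axis(mountain, axis=2, factor=0.0, pivot=pivot)
print(flat)  # every Z collapses to the pivot's Z: 5.0
```

This is why where your pivot sits (median point, 3d cursor, etc.) matters: the flattened vertices land on the pivot's coordinate for that axis.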

Tubes, Pipes, Cables, Complex 3d Shapes

A very common task I have when modeling is something like a racing bicycle’s handlebars or its brake cables or a curved railing or drain p-trap, etc. Any kind of free-form piping or tubing. Of course there is a way to do this in Blender and luckily it’s not too horrifically difficult.

Round

If you just need a normal round pipe or cable, it seems this is now easier than ever. Just create the pipe’s path with Add → Curve → Bezier. Then go to "Object Data Properties" whose icon should look like a curve and is right above the Materials icon. Open up the "Bevel" and choose "Round". You can adjust the "Depth" setting to change the thickness of the pipe; in theory this is the "radius of the bevel geometry" but be careful, because in practice I couldn’t quite get that to make sense.

Custom Profile

This video explains the classic technique. Here’s another video that may be even easier. They are similar in concept.

Basically, you need to create two objects.

  • A "Path" under Add → Curve → Path which will guide the pipe’s trajectory.

  • A "Nurbs Circle" under Add → Curve → Nurbs Circle. This will be the pipe’s profile. Note that this is not a regular mesh circle! I’m assuming it can be any kind of path since a circle — unless you scale it into an ellipse or something — is better achieved with the "round" technique described above.

Select the pipe path’s properties in the "Data" section of the properties menu (between the Materials and Modifiers). Go down to "Bevel Object" and choose your profile Nurbs Circle.

From there simply select the path and go to edit mode and manipulate the path’s spline.

To finish up, you can select the pipe and RMB to get the object menu and choose "Convert to Mesh". Now you can delete the circle.

Dividing A Curve Or Roughly Measure Tube Length

Imagine modeling a bathroom sink with a supply line you know from real world measurements is 18" long. How would that look when you hook up the shut-off valve to the faucet supply? I do not know how to specify a curve with a fixed length, but you can do trial and error until you get it to the right length.

How do you measure the length? This video provides some hints, but it’s a bit too quick and maybe not 2.8 friendly. The best way I could deploy was to create a small temporary object to use as a measuring device. Position it so that it is at the beginning of the pipe. Then with that new cube object selected in object mode, add a "Curve" modifier. Select the path of the pipe you’re trying to measure. Then slide the position of the cube along its new special X axis which will be along the curve. By sliding the cube object up and down the curve by adjusting its X position you can now get a rough start and end point for your pipe. Subtract and that’s your length. Roughly. Keep reading to understand this concept better with a different but relevant objective.
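
What that measuring trick approximates is ordinary polyline length: sample the curve into points and sum the segment distances. A generic sketch (not Blender's API):

```python
import math

def polyline_length(points):
    """Approximate curve length from sampled 3d points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Quarter circle of radius 10 sampled at 100 points.
pts = [(10 * math.cos(t), 10 * math.sin(t), 0.0)
       for t in (i * (math.pi / 2) / 99 for i in range(100))]
print(round(polyline_length(pts), 3))  # close to the true 5*pi = 15.708
```

The more densely you sample (or the more cube positions you note along the curve), the closer you get to the true length.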

Sometimes you have an elegant shape that needs to be regularly spaced along its length. This is related to a way to measure curves. For making a ruler with straight edges, you can use the subdivide feature. But what if the thing is not straight? For example, perhaps you have a winding road that needs regularly spaced striped lane lines. Or perhaps you’re doing a stitch and glue boat and you need the final segments/triangles that comprise the curve of the hull pieces to have exactly the same spacing.

  • Start with a curve object that is the object of interest. In this example, the road.

  • Check the setting at Curve → Properties → Active Spline → Endpoint. This allows the end of the curve to be handled precisely and not dangle in an unknown place.

  • Make a mesh of the road stripe — basically a single horizontal edge aligned so that it’s pointing to positive X. This mesh will be arrayed in the X direction by exactly its X width until it fills the length of the curve (and maybe some remainder’s overhang). Note it will not follow the curve without another modifier; it will just go straight and the length of the curve is just to limit the number of iterations.

  • Go into edit mode, select the left vertex of the stripe mesh and put the 3d cursor there with "[S]-s 2". Then switch out of edit mode with tab and "RMB o t" to Set Origin → Origin To 3D Cursor.

  • Move the stripe mesh with "g" so that its left end is on the end of the left road curve. Use [C] to snap to make sure it is exact. It may be smart to put the origin of the stripe mesh at the left end of the stripe to prevent offsets during the array.

  • Double check in the Item panel of the n menu that the Rotation and Scale for the road curve and the stripe mesh match.

  • Select the stripe mesh and add an Array modifier.

    • For the Fit Type, choose the Fit Curve method and select the road curve for the curve.

    • Consider the Merge option if the final divided segment is continuous. For road stripes, maybe don’t merge because you will want to delete every other segment. You could also do a Relative Offset → Factor of 2 for a striped road. Or a Constant Offset and choose the exact spacing you require.

  • With the stripe mesh still active, add a second modifier, a Curve modifier. Again choose the road curve for the curve.
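
As a rough mental model of the Fit Curve count (my assumption of the behavior, not verified against Blender's source): the number of copies is about the curve length divided by the item's effective width:

```python
import math

def fit_curve_count(curve_length, item_width, relative_factor=1.0):
    """Rough model of the Array modifier's Fit Curve mode: copies are
    laid end to end (spaced by item_width * relative offset factor)
    until the curve length is filled."""
    step = item_width * relative_factor
    return max(1, math.floor(curve_length / step))

print(fit_curve_count(100.0, 4.0))                      # 25 solid stripes
print(fit_curve_count(100.0, 4.0, relative_factor=2.0)) # 12 dashed stripes
```

This is why a Relative Offset Factor of 2 gives you a dashed road: every copy gets a stripe-width gap, so half as many stripes fit.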

Mapping Complex 2d Shapes

The most common problem I have to solve with Blender is not sculpting a cube into a bunny but rather taking some engineered product from the real world and modeling it. Real world things usually aren’t so organically 3d and there are aspects of them that the designer simplified. For example, if I’m trying to model an intersection for a vehicle simulation, I can pretty much assume that the designer of the road first thought about it in two dimensions before moving on to worry about camber and slopes. My workflow is very similar.

I often take a reference photo as described and what I want to do is just trace some geometry around important features to get started. But Blender, as far as I can tell, is terrible at this simple task. Well, it certainly does not emphasize it as important workflow, but, like all things in Blender, it is possible.

In object mode (tab to toggle) use the menu Add → Mesh → Plane and put a plane near where the geometry in question starts on the photo. Then "gZ" to move (in XY only) the plane square so that one of the points is where one of your model’s points should be. Go to edit mode (tab) and get rid of half the plane with "ab" (select all, bounding box) and grab two of the most incorrect points with a box. Then "x" to delete the vertices. Oh, another way is to have them all selected and do [A]-m and "Merge at Center" which will leave you with a single point at the center ready for action.

Now you have a line with one of the points correctly placed. Still in edit mode, select the incorrect point and "gZ" it to a correct location. With it still selected, do "eZ" to extend that point into a new line/point and put that on the next feature point of your model. Continue repeating this extending until you have the whole thing outlined.

Note that it’s just weird lines unless you can close them back up. For the final point that should be back where you started, you can put the point anywhere nearby. Then use "ab" to select both the beginning and ending points and do "[A]-m" to merge them. You may need to reset the merged point’s location but that should be it.

Now with all the points of this object selected you can "ez" to extend in the Z axis and give that some thickness. You can use "f" to put a top and bottom on it. Now you have a complex shape that you can sculpt.

Insert A Vertex

My main modeling strategy is to make long chains of line segments that fit the geometry in all views and then later join and face them. This means that I often am making long line extensions that turn out to need another vertex when the curve from another axis is taken into account. How can a vertex be added between two existing ones? In edit mode I select the two bounding vertices and use the edge loop feature — press [C]-r and then enter. The enter is important or it won’t take.

One task that comes up a lot is that a face needs more detail and it needs to be split into two faces. Imagine you want to transform the stock cube into a little house with a peaked roof. You basically want the top face to be two faces each 1x.5. Then you can just "g" position that edge up in the Z a bit and you have your house. But how to make that split? First select the two opposite edges that will be getting the new roof peak edge. Look in the "tools" menu for the "Subdivide" button. Click that and make sure you press enter to confirm the settings (which are settable in the lower left corner, in the tools panel).

Reorganizing Mesh Geometry Into Desired Objects

Often with the frenzy of extrude operations needed to get accurate geometry, you wind up with some silly organization. For example, I might extrude along a wall and then up a door edge, over the door’s width, back down the door height and then continue along the wall. This would be based on measurements of the room and door. But at the end of it all, I probably want an object representing the floor or walls and a separate object representing the door. How does one break out the door object?

My best guess for doing this which seems to work is to go into edit mode with the object selected that has the geometry you want to liberate. Select just that subset you want to make a new object out of. Then [S]-d seems to reasonably create in-mesh duplicates. This is also a good tip for doing things like repeating framing studs or some other repetitive geometry that might reasonably all be in the same mesh.

Once you have the geometry to create a new object from selected, press "p" which brings up the "Separate" menu. Since your geometry of interest is selected, choose "By Selection".

If you’re trying to hop some geometry from one object to the other, you can select the new object and the target object and use [C]-j to join the objects.

Text

I never really use Blender text because it seems so weird. But I’m getting over that and it’s actually pretty well behaved. First of all a new font format has sneaked up on me: somefont.woff. This stands for Web Open Font Format. Sounds wholesome and Blender chose to use it, so it’s good enough for me. I’ve joked that you can spot a Blender work a mile away just by the distinctive default font. How do you get something different? It’s not too hard. I have created a blender directory in /usr/share/fonts where Linux (Debian anyway) stores its fonts. From there I link to my collection with my Blender stuff.

$ ls -l /usr/share/fonts/blender/
lrwxrwxrwx 1 root root 54 Apr 30 22:17 MetalGothic.woff
   -> /home/xed/data/blender/EXTERNAL/fonts/MetalGothic.woff

It may also be possible to have Blender not look for fonts in the system font location. This can be changed with Preferences → File Paths → Data → Fonts.

When Text objects are in Edit mode, the text can be changed. You can go to the Object menu and Convert them to meshes. Once you’ve done that, you can’t edit the text content, but you can do the normal 3d graphics stuff.

Loop Cuts

Loop cuts are very powerful and in dense meshes they can be quite intuitive. But in simple sparse geometry, they can get quite perplexing. The general idea is to go to edit mode and then edge select mode. From there use [C]-r to start the loop cut process; move the mouse around and little yellow guides should show you where your cut will happen.

You can use the page up and page down keys to adjust the number of cuts. (A mouse wheel, the normal clumsy way to do this, is not needed.)

When you’re happy with it, press enter. Now you should be in sliding mode where you can slide your loop cuts along the loops. If you did sliding by accident, you can use the RMB to put them back in the middle, without sliding. If you don’t want sliding at all, just press enter twice when accepting the cut.

Loop cuts provide a bizarre but serviceable trick for accurate technical modeling. A very common requirement in technical modeling is the need for a new feature a known distance away from some other feature. Imagine a face that represents a wall in a house - if you want to add a window, you need to make some cuts in that face where the window actually is. You can measure where the window is in the house, say, 40" from the corner of the room. To model this, do a loop cut. You get the orientation right, hit enter, and during the sliding phase, you slide this new window edge all the way into the corner of the room. Hit enter to accept this. That seems useless, but now you have the correct geometry to use "g" on the selected geometry, constrain it to the appropriate axis (with "x" or "y" probably) and type the exact value to put it in the right place ("40"). Now you have fresh geometry exactly where it should be based on numerical measurements.
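
The arithmetic behind this snap-then-move trick is trivial but worth spelling out (generic units, hypothetical numbers):

```python
# The loop-cut placement trick: slide the new edge to a known datum
# (the room corner), then move it by the measured distance.

corner_x = 0.0          # datum: the corner of the room
window_offset = 40.0    # measured: window edge is 40" from the corner

edge_x = corner_x       # step 1: slide the loop cut all the way to the corner
edge_x += window_offset # step 2: "g x 40" moves it exactly into place

print(edge_x)  # 40.0 - geometry exactly at the measured position
```

The point of snapping to the corner first is that the subsequent "g x 40" becomes an exact offset from a known datum rather than an eyeballed slide.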

Sometimes the loop cut extends farther than you’d like and you want to rejoin some cut faces. Use the face mode, select the faces to join and from the RMB menu use "Dissolve Faces".

This is a very strange workflow but doable once you get used to it.

Knife Tool Cut

  • [Space] - finishes the knife tool operation

  • [S] (hold) - suppress the auto snapping

  • [C] (hold) - snap to midpoints

Triangulate

Do you love triangles? I do. But Blender likes to hide the triangular truth of computer science from you. You can go into face mode and select non triangular faces and press [C]-t. This will triangulate them.

Modifiers

When I first was learning Blender I couldn’t quite understand exactly what the "modifiers" were doing or when or to what. For example, some modeling software keeps lists of boolean constructions instead of actual geometry. The resultant geometry is just calculated on the fly when needed. So are Blender modifiers like this? Not exactly.

Modifiers are a way to truly modify some mesh geometry. The confusion arises because they can be set up and sometimes previewed before the target geometry is actually modified. They can be stacked so that two modifiers are set to modify some geometry and if you like how it’s going then hit Apply and the actual modification will be done. Generally you’ll want to start at the top modifier and keep hitting "Apply" as the rest bubble up to the top to do multiple modifiers.

Modifiers show up under the object in the hierarchy while they’re being adjusted. They disappear when you "Apply" them.

There are a ton of these modifiers — so many that it’s hard to keep track of them. A very good resource is the Modifier Encyclopedia which contains descriptions and examples.

Modify Category
  • Mesh Cache - Reuse rigged characters for crowds and exporting.

  • UV Project - Adjust a texture by orienting an empty object.

  • UV Warp - Like UV Project but can be animated for scrolling and other effects.

  • Vertex Weight Edit - Animate changes to vertex weighting.

  • Vertex Weight Mix - Vertex weighting from variable sources.

  • Vertex Weight Proximity - Changes weighting based on distance. E.g. something melting near a heat source.

Generate Category
  • Array

  • Bevel

  • Boolean

  • Build

  • Decimate

  • Edge Split - Useful for smoothing except where you want sharp edges. Puts a sharp edge on angles sharper than a certain threshold or on edges marked with the "sharp" attribute (which seems to be a thing). Also handy for normal fixing when exporting to Unity. Details.

  • Mask - a way to turn off certain vertex groups. Also for hiding meshes not related to the bone you’re currently rigging. Details.

  • Mirror

  • Multiresolution - for L.o.D. model sets

  • Remesh - Turns meshes into quads.

  • Screw

  • Skin

  • Solidify

  • Subdivision Surface

  • Triangulate

  • Wireframe

Deform Category
  • Armature

  • Cast - morphing animations

  • Curve

  • Displace - deforms based on a texture map. Interesting for topo maps.

  • Hook - pull other objects' vertices when interacting in animations.

  • Laplacian Smooth - denoises rough geometry but keeps the overall shape.

  • Laplacian Deform - keep relationships intact while deforming the model. Similar to using bones, but more rubbery.

  • Lattice - Deform mesh to conform to a cage.

  • Mesh Deform - Deform a mesh using another mesh as a control.

  • Shrinkwrap - Conform a spline to a mesh; good for a road on terrain. Put the modifier on the thing to project (e.g. the road) and select the object onto which it should be projected as target. Note that the quality can be very poor for very low poly models - this seems to like lots of geometry to work with.

  • Simple Deform - Twist, bend, stretch, taper in ways that can be animated.

  • Smooth - Smooths more than Laplacian Smooth but lets the geometry lose fidelity.

  • Warp - Animate part of a mesh deforming to a different location.

  • Wave - Things like a flag blowing. Looks like simple trig function.

Simulate Category
  • Cloth - Complex cloth physics.

  • Collision - Objects falling and behaving correctly.

  • Dynamic Paint - Objects leaving trails behind as they do physics (as in Collision).

  • Explode - Like build but with animations to make the parts fly away.

  • Fluid Simulation - Useful for the quantities of liquid involved in a glass spilling.

  • Ocean - A bonkers accurate simulation of open water waves, with wind and chop, etc.

  • Particle Instance - Change mesh of particles.

  • Particle System - Grass, hair, all kinds of fancy stuff.

  • Smoke - puffs of smoke.

  • Soft Body - like a body with no bones. Or a water balloon.

Array

Repeats a line of objects. If you want a higher dimensional array, stack multiple modifiers.

You can use an empty (which is an object type) object and use that in the "Object Offset" field to control the offset, rotation, and scale of each object with respect to the previous. This sets you up to have interesting tentacle-like constructions (e.g. scaled and rotated).

Interestingly the array modifier can also be used to get the effect of tube bending (bicycle handlebars, corkscrews, etc.). This is not exactly easy, but if you set up a hollow profile of your tube and the array it with a relative offset, you can then add another modifier, a curve modifier which can constrain the path of the arrayed objects. You can add a bezier curve to guide the path and then link that as the object of the curve modifier. The same video has a very quick demonstration of this towards the end.
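
A rough 2-D analogue of how Object Offset compounds per copy (a conceptual turtle-style sketch with made-up numbers, not Blender's actual matrix math):

```python
import math

def arrayed_copies(n, step_angle_deg, step_scale):
    """Each copy applies the offset transform once more than the last,
    like the Array modifier's Object Offset: rotation and scale compound."""
    copies = []
    angle, scale = 0.0, 1.0
    x, y = 0.0, 0.0
    for _ in range(n):
        copies.append((round(x, 3), round(y, 3), round(scale, 3)))
        # advance by one offset step: move 1 unit along the current heading,
        # then rotate and shrink for the next copy
        x += scale * math.cos(math.radians(angle))
        y += scale * math.sin(math.radians(angle))
        angle += step_angle_deg
        scale *= step_scale
    return copies

for c in arrayed_copies(5, 15.0, 0.8):
    print(c)  # positions curl and segment size shrinks: a tentacle
```

Because the offset is applied cumulatively, a small rotation and scale on the empty produces the curling, tapering tentacle look.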

Polar Arrays

Unfortunately just accomplishing a simple polar array (e.g. bolt circle) seems absurdly tricky. But it is possible. The second half of this video and this one cover this trick, but it isn’t super obvious or easy to use. Basically you need to add an "empty" type object at the center of your polar array; disable relative offset and enable object offset. You also must make sure your mesh’s geometry center is at the polar center. Choose the number of items you want. Nothing seems to happen until you rotate the object around the geometry center, which should also be where the empty is. You have to put in the rotation angle manually for a single interval. For example, if you specified 6 objects and your object is at the 3d cursor, you can "rz60" and all 6 should appear, correctly rotated in place. Note that before you apply this modifier, you should [C]-a to apply transformations; this will prevent a weird spiraling-out-of-control look.

If you fuss with it enough, it can theoretically be used for spiral staircases and helices like DNA, etc. It might be best to just add a circle, change the number of vertices, and then place things at those vertices using the 3d cursor. Not fun or easy, but at least no heroic mental gymnastics.

Another completely different technique is quite tolerable when you have a tame number of items in your bolt circle. If you have 6 items and know that the angle between them is 60 degrees (or 4 for 90, or 5 for 72, or 8 for 45, etc) you can do the following. Make sure that your pivot point is on the 3d cursor. Then select the object you want replicated. Duplicate it with [S]-d (or [A]-d if a shallow copy makes sense) and instead of hitting enter or moving the new object, hit r to rotate. Assuming your 3d cursor is in the middle of the bolt circle type in that angle. That’s one copy done. Then hit [S]-r to repeat the last action and you can fill out the others very quickly and easily with a minimum of cognitive fuss.
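
Both polar array techniques reduce to rotating copies by 360/n degrees about a center. A sketch of the resulting positions (plain Python, not Blender's API):

```python
import math

def bolt_circle(n, radius, center=(0.0, 0.0)):
    """Positions of n items evenly spaced on a circle,
    i.e. what a polar array of n copies produces."""
    step = 360.0 / n  # the angle you'd type after "r z"
    return [(center[0] + radius * math.cos(math.radians(i * step)),
             center[1] + radius * math.sin(math.radians(i * step)))
            for i in range(n)]

holes = bolt_circle(6, 5.0)
print(len(holes), round(360.0 / 6))  # 6 holes, rotated 60 degrees apart
```

This is also where the quick table in the text comes from: 6 items means 60 degrees per interval, 4 means 90, 5 means 72, 8 means 45.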

Note that I’ve found it helpful to create geometry from the faces of cylinders. A very interesting trick is "Deselect checkered" which can leave every other face selected. From there you can build them out as if they were individually created and placed with a polar array.

Bevel

This basically can break sharp corners with what machinists (and AutoCAD) call chamfers (1 intermediate surface per corner) or fillets (rounded corners, in Blender that means as many subdivisions as you’d like to allocate).

While there is a bevel modifier, there is also a non-modifier operation bevel tool. The important key binding is [ctrl+b]. It can be tricky to use if you do not have a scroll wheel but it is possible.

My typical requirement is to generate a machinist chamfer or fillet at a known exact distance from the theoretical sharp intersection of the two lines (or faces — i.e. lines when in profile) to where the bevel begins. For example, if I want to model an ice rink that has an 8.5m radius fillet between two corner wall surfaces, the distance from the theoretical sharp corner to where the rounded fillet begins (or ends if you think of it that way; where the fillet meets the flat surface) is 8.5m. How does one generate that (without a scroll wheel)?

  • Tab into Edit Mode.

  • Select the vertices or edges you want to bevel. [a] for all of them if that makes sense. They may already be all selected by default.

  • [ctrl+b] to enter interactive bevel mode.

  • Look in the lower left information line at the bottom of the interface. There you should see the bevel tool key bindings.

  • If the "Mode" is not "Offset" keep pressing [m] until it is. What’s interesting is that there is a mode called "absolute" which may be even more correct, but it does not show up when you cycle through the modes using the [m] key in bevel mode.

  • For 2d profile modeling you need the "Affect" setting to be "vertices" and not "edges". You can think about edges if you’re trying to break up two faces into those two faces plus some number of segment faces. If you’re trying to break up two edges into those two edges plus some number of segment edges, then use vertices. This setting is toggled with [v] while in bevel mode.

  • Now you can specify the width with [w] followed by the size in your current units. Don’t think you can type .01m to get a centimeter; the m will be picked up as a mode change. If you make a mistake, after typing [w] and some numbers, you can use backspace.

  • If you want to change the number of segments you can use [s] followed by the number of intermediate segments you want.

Note that if you press enter during any of this the bevel will be somewhat committed but the full after-the-fact dialog box will pop up. It is actually quite helpful and a good way to interact with the settings to get what you want. I find a good strategy is to use the bevel tool keybindings as just described as best as I can and then when I accidentally hit enter or am sort of done, the dialog box pops up and I have a second chance to get it right.
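
One more bit of arithmetic that helps with the width: for a fillet of radius r between two edges meeting at angle theta, the setback from the theoretical sharp corner to the tangent point is r/tan(theta/2), which equals r only for right angles (hence the 8.5m in the rink example). A sketch:

```python
import math

def fillet_setback(radius, corner_angle_deg):
    """Distance from the theoretical sharp corner to the point where a
    fillet of the given radius becomes tangent to each edge."""
    half = math.radians(corner_angle_deg) / 2.0
    return radius / math.tan(half)

print(round(fillet_setback(8.5, 90.0), 3))  # 8.5: right angle, setback == radius
print(round(fillet_setback(8.5, 60.0), 3))  # sharper corner needs more setback
```

So when typing a bevel width for a non-right-angle corner, compute the setback first rather than assuming it equals the radius.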

Other things to keep in mind…

  • Profile should be 0.5, which just means not skewed towards one face or the other. Maybe a knife edge chamfer would be able to exploit this setting.

  • If you accidentally use [ctrl+b] in object mode, you’ll get some restricted render view box that sticks to your viewport. Use [ctrl+alt+b] to clear that up.

A good reason to use the bevel modifier besides the normal advantages of modifiers (i.e. deferred procedural geometry) is that it does not require or even encourage a scroll wheel. If you do not conveniently have one handy, this modifier can be a big help.

If you only want a subset of your object’s geometry affected by the bevel operations, just define a vertex group (Edit Mode → Mesh → Vertices → Vertex Groups → Assign To New Group). After that the vertex group will probably be gone so you might want to Remove From All in the same menu.

This tip discusses and shows both interactive and modifier methods. Here’s a very good video that shows some advanced topics with the bevel modifier. Full official details about the bevel tool.

Boolean Composition

A very powerful way to construct complex but realistic geometric shapes is to compose them as a series of additions and subtractions of simpler shapes.

The operation I find most useful is subtraction. Imagine an apple as shape A and a bite out of the apple as shape B. The Apple logo could be described as A minus B. To achieve this…

  1. Select the A shape.

  2. Go to the modifiers wrench in the properties bar.

  3. Choose "Add Modifier→Boolean"

  4. Change "Operation" to "Difference".

  5. To the right of the Operation click the "Object" button.

  6. Select the B object from the list.

  7. View the previewed change with wire frame mode.

  8. Commit to the change with "Apply".

  9. B may disappear and A will be suitably modified. Or B will still hang around and you have to manually erase it leaving the subtracted A.

The other modes work similarly.

There is an addon called BoolTool that ships with Blender that dramatically streamlines this janky workflow by putting a sensible direct menu right in the n shelf.

Troubleshooting Boolean Operations
  • Are all the scales applied? [C]-a, choose Scale. Do this in object mode for the modified object and any target objects.

  • Are your normals nonsensical? You can turn on visual representations of them from the overlays menu (Display Normals - "Display face normals as lines.") Those can be hard to see and then you have to adjust the normal fuzz to be tractable to keep from getting a huge porcupine ball. Often the better check in these diagnostic situations is to select the Face Orientation option in the Overlays menu. This will show the surface as blue on the normal side and red on the back (inverse normal) side. To fix normals, go to Edit Mode, 3 for face select and select the problematic faces (a for all if they all are) then [A]-n to bring up the normal menu. Choose Flip or Recalculate Outside. You can also just tab into Edit mode and press [S]-n to do a quick recalculate outside on all the faces (which are generally selected when you initially switch from object mode).

  • Are you sure your target objects are named and specified correctly? (I often name them "AAAA", etc., so they are at the top of the target pull-down list, and I have had confusion with multiple cutting objects getting mixed up.)

  • If you’re using multiple boolean modifiers, try them in isolation to make sure each operation works alone.

  • It’s probably unwise to have cut and cutting objects have wildly different polygon densities.

  • Things can get messy when the cut and the cutter are almost coinciding.

Here is a good thorough article on this exact topic.

Build

This allows animating the appearance (or disappearance) of all of the faces in an object. Give it a starting frame and an ending frame and the objects will start at nothing and by the end frame create the whole object. You can use "Randomize" to have it fade in or disintegrate like it’s being teleported. If you want it to have a definite order search (Space) for the "Sort Mesh Elements" menu and choose the order you want the faces. "View Axis" builds farthest away from you first and fills in towards you so you don’t obscure the building. This might be good for an effect like leaving interesting tracks behind a vehicle.

Decimate

Only works in object mode. Ratio of 1 leaves 100% while 0 removes them all. Of course nothing happens with this modifier visually and you have to apply it to see any evidence at all that it worked. The nice bonus of this modifier for people like me who think computer graphics should be based on triangles is that the new mesh will try to be triangles. Yay!

Mirror

Unfortunately there are only 873 ways to mirror objects in Blender — all of them are pretty baroque.

The most basic non-modifier way is to just basically do a single axis scale of -1. If a mirrored copy is needed, duplicate the object first. The real trick is to mirror about some specific point. It can be done but like all point specifying in Blender, it can be way harder than it should be.
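The "scale by -1 about a specific point" idea reduces to simple arithmetic. Here is a plain Python sketch (illustration only, no bpy) of mirroring a point across the plane through a chosen pivot, perpendicular to X:

```python
# Mirroring across X about a pivot is a component-wise scale of (-1, 1, 1)
# applied relative to that pivot point.
def mirror_x_about(p, pivot):
    px, py, pz = p
    cx, cy, cz = pivot
    return (cx - (px - cx), py, pz)

print(mirror_x_about((3.0, 1.0, 2.0), (1.0, 0.0, 0.0)))  # (-1.0, 1.0, 2.0)
```

Points already on the mirror plane stay put, which is exactly why the centerline vertices need to sit at the mirror plane.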

Here’s the process for modeling a single symmetrical thing. This is different than two distinct symmetrical bookend type objects.

  1. Model one half of your boat, car, person, etc. Let’s say the starboard side only for this example. Just leave the port side completely empty. Plan all your modeling to leave port blank.

  2. Round up all the center points (bow, keel line, etc) and make sure they are the same. Ideally if the boat is pointing toward positive X and centered on the origin then make sure that all the center points have a Y of 0.

  3. You must establish the pivot point. This will be the orange dot for the object.

    • [Tab] for edit mode.

    • Select a vertex on the centerline.

    • [Tab] for object mode.

    • [C]-[S]-[A]-c

    • Origin to geometry (if it balks check mode!)

    • Select the object to mirror.

    • [S]-d to duplicate it (or skip this step to just invert the original geometry).

    • "s" for scale.

    • "x", "y", or "z" for the axis to scale.

    • "-1" to invert the values.

    • Select the other one and [C]-j to join them if desired.

    • Fix normals. Try [C]-n or [C]-[S]-n maybe.

Note that the Mirror modifier is a way to generate mirrored geometry. This instantiates when the modifier is "applied" leaving you with new geometry on the object which can then be independently changed. It is reasonable to model a symmetrical thing with this unapplied modifier dangling the whole time. There is, however, a technique I like better using linked copies. I think the overhead is similar.

To make a proper and efficient symmetrical thing, say the hull of a boat, a decent technique is to create a linked duplicate (with [A]-d). This has all the placement and scale properties of an independent object, but its mesh data comes from the original from which it was cloned. Therefore you can make a linked duplicate and then scale that around an axis by -1x as long as your geometry origin is on your mirror plane. This way you can continue to just work on one half (port) while the other (starboard) half takes care of itself.

Just don’t forget to make your pivot point the 3d cursor! See 18:37 here to see what I mean. (It defaults to the geometry Median Point.)

Mirror Naming Conventions

Blender has some deep dark functionality that can understand and do the right thing with symmetrical pairs of items — basically imagine a left arm bone and a right one. Messy details are here. Blender is pretty flexible about the conventions it accepts. But let’s be consistent and go with the one that Blender itself prefers when the "AutoName" feature is used.

Name symmetrical pairs of objects with the following convention.

  • Left - MyObjectName.L

  • Right - MyObjectName.R

  • Front - MyObjectName.Fr

  • Back - MyObjectName.Ba

  • Top - MyObjectName.Top

  • Bottom - MyObjectName.Bot
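The suffix convention can be sketched as a toy Python helper. This is an illustration of the naming scheme only, not Blender's actual AutoName code:

```python
# Map each side suffix to its symmetrical partner, per the convention above.
PAIRS = {"L": "R", "R": "L", "Fr": "Ba", "Ba": "Fr", "Top": "Bot", "Bot": "Top"}

def flip_side(name):
    base, dot, side = name.rpartition(".")
    if dot and side in PAIRS:
        return base + "." + PAIRS[side]
    return name  # no recognized suffix; leave unchanged

print(flip_side("Arm.L"))   # Arm.R
print(flip_side("Spine"))   # Spine
```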

Screw

What the calculus books call "solid of revolution". This can do helices too (a little easier than the array modifier). Here’s a decent demo but this one seems reasonably straightforward. You basically need a profile to revolve and another object that has/is a sensible center.

Skin

This is a very neat modifier that can take some reasonably simple wire frame thing and puff it out into a full mesh. If you just apply this to a poly-line of edges you’ll get a square tunnel following them. (By the way, [A]-c will help you convert a Bezier path to some geometry.) Mostly this seems good for getting some rough stick figures and then getting some rough "clay" on them for further sculpting. Note that you can have it leave behind the stick figure’s sticks as proper Blender armatures.

Triangulate

In computer science there are only triangles but Blender is strangely shy about letting that be known. If you like triangles you can divide meshes into triangles with the triangulate modifier. This modifier works well and has several fancy fine controls. Remember when dealing with modifiers, you may need to apply this in object mode, but the results won’t be interesting until you’re in edit mode.

Another way to do this without modifiers is Mesh → Faces → Triangulate Faces or [C]-T. Here’s a Python command line for the same thing.

bpy.ops.mesh.quads_convert_to_tris(quad_method='BEAUTY', ngon_method='BEAUTY')

Another way which is wasteful but aesthetically balanced is to "poke". This puts a vertex in the center of a polygon and triangulates radially from it. This is also in Mesh → Faces → Poke Faces or [A]-P.

Solidify

This takes a two dimensional zero volume shell and gives it some thickness. It creates an inside and an outside. The exception is if you solidify a 1d thing with no area (a circle or line). Then you’ll get a thicker circle or line. The "Fill Rim" option makes sure the inside is completely enclosed; without it, you’ll get two disconnected shells, an inside one and the original outside mesh.

Wireframe

This takes a mesh, actually, probably only faces, and replaces the edges (and face) with a thin but solid shape. This makes the edges into more substantial geometry. You can also leave the faces by leaving "Replace Original" unchecked. I feel like you can’t do much with this to have custom wire profiles, but if you just want a simple lattice made from mesh quads or triangles, this can quickly make some striking geometry. Good, for example, for making an ornate lattice partition or lampshade. This could have very cool shadow effects.

Lamps

  • Point - Omnidirectional point (e.g. normal light bulb)

  • Spot - Directional point (e.g. theatrical spotlight)

  • Area - Light producing area (e.g. window, TV)

  • Hemi - Soft distant light (e.g. cloudy sky)

  • Sun - Distant directional light (e.g. sunlight)

Diffuse is when light scatters as on a rough surface. Specular is where the angle of incidence is equal to the angle of reflection.
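That specular rule (angle of incidence equals angle of reflection) has a tidy vector form, r = d - 2(d·n)n for incoming direction d and unit surface normal n. A plain Python sketch:

```python
# Reflect direction d off a surface with unit normal n: r = d - 2*(d.n)*n
def reflect(d, n):
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray heading straight down onto a floor (normal pointing up) bounces
# straight back up:
print(reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # (0.0, 0.0, 1.0)
```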

Addons

Add Curve: Extra Objects

I guess I wasn’t the only one marveling at the oversight of not being able to add a simple point. Or a line. Well, now you can! No more needing to collapse a cube — something I’m all too good at.

TinyCAD

mesh_tinyCAD comes ready to go. Just turn it on in addon preferences.

Find mesh_tinyCAD features by right clicking into the context menu in edit mode; it will be at the top.

Here are things it claims to do, all of which are sensible.

  • VTX - Identify intersections from crossing edges (X shape), or edges that would cross if one (T shape) or both (V shape) were projected.

  • V2X - Similar to the case with V shapes this puts a vertex at the projected intersection but leaves the original lines alone.

  • XALL - Add intersection points like the X shape but on a collection of crossing edges.

  • E2F - Extend an edge until it hits a face at what my system’s terminology called the pierce point. A good way to execute this is to select the face in face select mode, then change to edge mode, get the edge, then apply. So 3, pick face, 2, [S]+pick edge, RMB, choose E2F from mesh_tinyCAD menu.

  • BIX - For two non-parallel edges, bisects their angle and creates usable guide geometry (not part of original) representing that bisector. Note that if the lines are intersecting, it will double up the angle vertex.

  • CCEN - Construct a circle from any 3 points. Note that the 3 points may easily find themselves doubled up. Note that the 3d cursor will move to the circle's center point which is extremely useful for reusing it as a datum once the circle object has degenerated into just an arbitrary, if circular, mesh. This can possibly help when looking for quadrant points for a degenerate "circle". Try creating the circle with 4 faces. Then the only problem is how to align this square with the axes. I haven’t figured it out, but there must be a way. Check out Align Rotation To Vector; don’t know if that helps, but interesting.
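For the curious, here is roughly the math CCEN must be doing: the circumcenter of 3 points, the spot equidistant from all three. Shown in 2D for brevity; plain Python, not the addon's actual code:

```python
# Circumcenter of triangle (a, b, c) by the standard determinant formula.
def circumcenter(a, b, c):
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# Three points on the unit circle centered at the origin:
print(circumcenter((1, 0), (0, 1), (-1, 0)))  # (0.0, 0.0)
```

If the three points are collinear, d is zero and no circle exists, which is presumably why the addon wants sensibly placed vertices.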

BlenderGIS Addon

Here’s an amazing resource that allows GIS data to be handled smoothly in Blender. If it’s working properly, it seems to be able to construct topo maps from nothing and populate them with simple building models. It looks so amazing that I am still interested even though it’s a struggle to install and get working.

Here’s a nice demo of it in action. Here’s how to install.

These modules are needed.

apt install python-gdal python3-gdal libgdal-dev gdal-data gdal-bin
apt install python-pyproj python3-pyproj

But wait! Remember that Blender has its own Python. So this apt install is nice, but doesn’t help.

BAD=~/.config/blender/2.92/scripts/addons/ # Blender Addon Directory
cd $BAD && git clone https://github.com/domlysz/BlenderGIS

Maybe restart Blender to be safe.

Go to Edit → Preferences → Addons and find BlenderGIS and enable it. Note the little triangle hinting that there are tons of settings you can play with to set up the behavior of the addon.

There is now a GIS panel in the 3d view properties menu. You should also have a GIS menu with "View Select Add Object" etc. Under that GIS menu, select "Web Geodata → Basemap"… I’m following these directions and… Well, dang.

Still, this looks very cool and worth keeping an eye on.

Drawing Tools

Note that there is a pre-set layout called "2D Animation" but its tab is not out by default. Click the + on the layouts bar to find it. It’s not required, but good to remember that it’s there.

Geometry Nodes

Official documentation is ok and worth checking with but by no means covering everything you might wish it did.

First thing to worry about is if the default noodle shapes are to your liking. I like them straight so go to Preferences → Themes → Node Editor → Noodle Curving and set it to 0.

The main problem I have with Geometry nodes is that I get a node that is plausible because it is named something like "Separate Geometry" and I want to do exactly that, so I add the node and now I need to plug in a Selection on the left; so what nodes produce a Selection on the right? I’ll try to list all the nodes organized by connection type.

Geometry (Bright Green)

Object (Orange)

  • Object Info - IN

Selection (Pink)

  • Boolean Math - IN, OUT

  • Compare - OUT

  • Input → Boolean - OUT

  • Set Position - IN

  • Separate Geometry - IN

  • Distribute Points on Faces - IN

  • Merge By Distance - IN

  • Triangulate - IN

  • Set Material - IN

Instance (Pink)

  • Object Info - IN

Attribute

  • Raycast - IN, OUT

Vector (Violet)

  • Object Info - OUT:Location, OUT:Rotation, OUT:Scale

  • Position - OUT

  • Combine XYZ - OUT

  • Input → Normal - OUT

  • Set Position - IN:Offset, IN:Position

  • Vector Math - IN, OUT

  • Separate XYZ - IN

ID

  • Index - OUT

  • Set ID - IN

  • Random Value - IN

Distance (Gray)

  • Raycast - OUT:Hit Distance

  • Merge by Distance - IN

Material

  • Set Material - IN

Scalar Value Float (Gray)

  • Math - IN, OUT

  • Separate XYZ - OUT:X, OUT:Y, OUT:Z

  • Value - IN

  • Grid - IN:Size X, IN:Size Y

  • Combine XYZ - IN:X, IN:Y, IN:Z

  • Compare - IN (x2)

Scalar Value Int (Olive Green)

  • Points - IN:Count

Collection (White)

  • Input → Collection Info - IN

Color (Yellow)

Geometry Node Examples

Here’s a very clever use for geometry nodes. When doing serious projects it’s easy to get very heavy assets that can bog down live editing. This very clever trick (from this video) shows how to have your geometry substituted out for a low poly place holder automatically.

hi-low-asset-switch

This presentation expands upon the idea by suggesting a switch on a Delete Geometry node that selects random points to not show (using Utilities → Random Value node → Boolean). This allows you to develop with a very lean set of points, and then flip the switch for full detail final output.

Textures And Materials

Need a seamless texture? Use Krita and View → Wrap [Ctl,w].

Need to delete a texture? Sometimes they pile up or they come with imported models you download from elsewhere. How do you get rid of them when you’re not going to use them at all? The answer is the sub mode of the outliner called Blender File (normal view is View Layer). Get that displaying and choose the materials. Then you can select (multiples are possible too) the materials you don’t want, right click and delete. Also note the "special menu" (shaped like a down V) in Grease Pencil mode has a "Remove Unused Slots" which gets rid of such cruft (demonstrated).

Or another way if that’s not working is putting this in a Python console.

for m in list(D.materials): D.materials.remove(m, do_unlink=True)

Hit enter twice and hopefully that works. Iterating over list(D.materials) instead of the live collection keeps the removal from skipping entries as the collection shrinks, which is probably why a bare loop over D.materials only gets rid of most of them.

Using A Transparent Image As A Material

Seems like this shouldn’t be too hard but there’s some node juggling that is required. Start with the Texture → Image Texture node. Put your transparent PNG in there; no unexpected settings. Next you need a Converter → ColorRamp node. Both of those will feed into a Color → MixRGB node — Color from the Image node goes to Mix’s Color2 and Color from the ColorRamp goes to Color1. If you get that backwards, you’ll get a reversed negative which may be a useful effect if that’s what you really need. The Color output of the Mix node goes into the Base Color of a normal Principled BSDF node. The BSDF node of that goes finally to the Surface of the Material Output node. Changing the Color bar of the ColorRamp changes what the background that’s not the image stuff looks like; note that each gradient position on the top color bar thingy has its own color.

That’s one way to do it. While working on a texture for tracer bullet fire I found another way. This guy’s succinct video seems to work and has a hilarious ending.

The process that worked for me:

  • Texture Coordinate node’s UV output feeds a Mapping node’s Vector input. This allows you to animate the position of this decal or overlay.

  • Its Vector output goes into the Image node’s Vector input.

  • The Color output goes into a Base Color on a Principled BSDF shader node.

  • The Alpha of the Image node goes to the Fac on a Mix Shader.

  • A Transparent BSDF shader node stands alone with only its output going to the first Shader slot of the Mix Shader.

  • The second Shader input on the Mix Shader comes from the BSDF output of the Principled BSDF node.

  • The output Shader of the Mix Shader node finally goes into the Material Output node’s Surface slot.

Update for Blender 4

I found that this task has been made much easier in more recent Blender versions. In theory the old ways should still work, but you don’t even need to do that. Just the obvious alpha from your image to the Alpha of the Principled BSDF and that’s it. Like it should be. But! That actually doesn’t quite work out of the box — and neither do the old methods. You have to go to the Material tab and then Settings and then set Blend Mode to Alpha Hashed. I had a project where I wanted my decal to cast shadows and was disappointed that it didn’t work. But then I saw Shadow Mode - turn on Alpha Hashed for that too and it will all work! Super easy!

Here’s a 24s video that shows this.
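The Blender 4 setup above can be sketched in Python. This is a hedged sketch using API names from the 4.0/4.1 era (blend_method and shadow_method on Material); the material name and image path are placeholders and it is untested outside a live Blender session:

```python
import bpy

mat = bpy.data.materials.new("DecalMat")   # placeholder name
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

bsdf = nodes["Principled BSDF"]
tex = nodes.new("ShaderNodeTexImage")
# tex.image = bpy.data.images.load("//decal.png")  # your transparent PNG

# The "like it should be" wiring: Color and Alpha straight into the BSDF.
links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])
links.new(tex.outputs["Alpha"], bsdf.inputs["Alpha"])

# The two settings described above: Blend Mode and Shadow Mode.
mat.blend_method = 'HASHED'
mat.shadow_method = 'HASHED'
```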

Applying Multiple Materials To Parts Of An Object

This is easy unless you forget the weird process. Which I have done.

The trick is to focus on the "Active Material Index" (tool tips will say that when you’re over it). This is at the very top of the material tab accessible with the pink checkered ball icon. This is not the "Browse Material To Be Linked" box with its own white checkered ball icon. No, you need the top one. You must add all new textures you plan to use on an object here. Click the + to do this. It is in the far upper right of the tab just under the pin.

When you have the multiple materials ready there, you can select faces you want to apply a texture to, then highlight the correct texture in the index, and then hit the Assign button.

Object Appearance During Editing

There are many helpful modes in Blender to control how stuff looks but they can be hard to find.

  • Permanent wireframe - If you want an object to always show its wireframe outline, maybe only that, no matter what mode is selected, try: Object Properties → Viewport Display → Wireframe It can be very helpful when trying to carve something up with the boolean modifiers to make the cutting tool easy to see through.

  • Want objects to be colored randomly so they’re easier to distinguish? Try z,6 to get solid mode. Then look for the "Viewport Shading" option pull down — a V icon near the render mode ball icons in the top right. Then Color → Random.

  • Put a texture on something without running the full render? You can do the same as Random, but choose "Texture". This is slightly different than z,2 "Material Preview" mode. Both are lightweight and helpful.

Remeshing While Preserving Texture

Imagine you just did a photogrammetry extravaganza and you have a model with a zillion polys. Everything looks good but you’d like to economize that mesh a bit. But you don’t want to lose the detailed texture that the process managed to pick up. The general procedure is described in this video.

The first thing to do is duplicate the heavy model. Rename one "Thing_ORIGINAL" or something like that and the other "Thing". You’re eventually going to get rid of Thing_ORIGINAL but hide it for now. With Thing selected look for the Object Data Properties tab and in that look for Remesh. I used Voxel and a "Voxel Size" of 2.5mm and an Adaptivity of 250mm. I had Fix Poles, Preserve Volume, Preserve Paint Mask checked. Click the Voxel Remesh and come back when it’s done. Inspect that and if it’s not quite satisfactory, try some different settings. The Quad remesh is also interesting and tries harder to have lined up neat quad faces but it’s less sensible about adhering to certain kinds of complex geometry. There is also a Remesh Modifier that can probably do this too. I almost feel like that’s faster since you can check what the settings will do more quickly.

Great, the mesh is the way you like it, but the texture is lost. Basically you need to bake a texture rendered from your original to the new UV map for the new mesh. The first thing to do is make sure "Thing" has no texture. It may be pointing to an old image for the original and that will be comically wrong. Bring up the UV Editing workflow tab and create a new bitmap in the image viewer. Change the size to 2048x2048 or whatever you think is necessary. Then with Thing selected and in Edit mode, use u to bring up the UV Mapping menu and choose Smart UV Project. Hopefully you see the new mesh’s UV map somewhat sensibly laid out on the new blank image that Thing’s material refers to. Once you get there, it’s time to project.

With both of the Thing objects visible (though it probably doesn’t matter), you need to select both of them such that the new Thing is the bright orange and the old thing is the dark orange. I found that means select Thing and then Thing_ORIGINAL even though other selection processes in Blender seem to do it differently. Next go to the Render Properties tab and make sure you choose Cycles for the Render Engine. This is temporarily required to do this process and will present a Bake subsection. The Bake Type is Diffuse since you’re really trying to capture the actual light bouncing off this thing which includes its color. Under Contributions make sure only Color is checked. Now make sure Selected to Active is checked and since you were careful with the selection order, this should be ready. I left Cage unchecked and made an Extrusion length of 10mm. The Max Ray Distance was ok at 0. Now click the Bake button and take a break while that completes. If all goes well, you should have new pixels on your texture image that neatly paint the new mesh.

Delete the original object and its mesh data if you like. Probably smart to save or embed the new texture bitmap.

Maps Related To Appearance

Note that with Node Wrangler enabled, you can use [Ctl+Shift+t] to automatically setup maps. The addon will look for common names and this behavior can be refined looking at Preferences → Addons → Node: Node Wrangler → Edit tags for auto texture detection in Principled BSDF setup.

Color Maps

This is the most basic idea of rendering a surface based on information in a map. The map in this case is basically just an image or photograph which gets applied to the surface. It’s pretty simple and pretty effective and the first place to start.

An "albedo map" is very similar to an ordinary color map except that all the shadows and highlights have been removed. This allows your rendering engine to create more details accurate to the scene instead of a part of the texture collection process. It can be used where a "diffuse" input is needed.

Plug an albedo or color map (in that order of preference) into the Principled BSDF "Base Color" property.

Default naming: diffuse diff albedo base col color

Environment Maps

For specific implementation details on how see here.

Also known as reflection mapping. This is especially important if you are not staging shots entirely enclosed within a full model of a room or cave or similar. If, for example, you have a model of a car outdoors, some light rays (from the sun or street lights or other cars' lights) will bounce off the car’s reflective surfaces and head out in the direction of the infinite cosmos. An environment map answers the question of how to render that ray on the reflected surface.

These are often high dynamic range images [HDRI] scenes stitched together from multiple source images to map to a cube. In games this cube is generally called a skybox.

Procedural "Mapping"

If you use an image (i.e. a 2d array of values) to play texture tricks at render time, it can be properly thought of as mapping each pixel value to properties intended for a region of the geometry. However, if using procedural texturing tricks, the precision is arbitrarily calculable. Consider using a Minecraft texture to render a cube; if you zoom into a 1/32nd of the cube that region will be homogeneous. However, if you use a procedural approach with sin(X), then no matter how much you zoom in, there will always be subtle differences at least mathematically. This is why procedural approaches may not technically be "mapping" per se.
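The zoom argument above can be made concrete in plain Python: an image lookup quantizes position to a texel, while a procedural function like sin(x) yields a distinct value at every coordinate.

```python
import math

texels = [0.1, 0.9, 0.3, 0.7]          # a tiny 4-texel "image"

def image_sample(u):                   # u in [0, 1): nearest-texel lookup
    return texels[int(u * len(texels))]

def procedural_sample(u):
    return math.sin(u * math.pi * 2)

# Two nearby points land in the same texel but differ procedurally:
print(image_sample(0.30) == image_sample(0.31))            # True
print(procedural_sample(0.30) == procedural_sample(0.31))  # False
```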

Bump Maps

Broadly speaking, the official definition is a texture mapping technique that simulates bumps. So this actually includes normal maps. Often it is used to refer to single channel maps.

Default naming: bump bmp

Height Maps

Wikipedia seems to think that black (#000000) is typically the lowest value and white (#ffffff) the highest, which makes sense if you imagine a "floor" being zero. It is conceptually possible to use the extra channels of a color image to get a finer level of displacement if 256 levels are not enough.

Default naming: height heightmap
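The extra-channel idea can be sketched in plain Python. This is a toy packing scheme for illustration, not a Blender feature: two 8-bit channels give 65536 height levels instead of 256.

```python
# Pack a normalized height into two bytes: R = coarse bits, G = fine bits.
def pack_height(h):             # h in [0.0, 1.0] -> (r, g) byte pair
    v = round(h * 65535)
    return (v >> 8, v & 0xFF)

def unpack_height(r, g):
    return ((r << 8) | g) / 65535

r, g = pack_height(0.5)
print(r, g)                           # 128 0
print(round(unpack_height(r, g), 4))  # 0.5
```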

Displacement Maps

The concept here is that this kind of map is used to actually modify geometry to create more complex geometry. In other words, this is not an illusion to jazz up simple geometry; it is a way to create proper complex geometry. A good example is taking a topo map and making real terrain. You really want the real terrain modeled but the information about how it is shaped is in a 2d format. If you’re going to look at the texture from the side and its actual geometry will need to interact properly with the background (e.g. you’re on the ground looking for the sunset between peaks in a mountain range) this kind of map is useful.

Displacement maps can be helpful if your target surface is so rough and chunky that you really need that extra geometry to be there casting shadows or defining tangential silhouettes. For example if you have a round castle turret composed of big rough stones, a displacement map might be needed to keep the whole thing from looking like a cleverly painted cylinder. Another example is rock strewn landscapes.

Default naming: displacement disp dsp (Note that Node Wrangler puts this in the same category as height.)

Normal Maps

The idea with normal maps is the direction a light ray reflects off a complex surface is carefully captured at asset creation time so that it can be more efficiently deployed at render time. Instead of having to investigate underlying geometry to calculate how light hitting a particular point would reflect, the rendering engine can just look it up on a map that explicitly contains that information. This also allows geometry with relatively few polygons to show smooth surface details as if it were more finely modeled. Normal mapping is used heavily by AAA game assets.

Normal maps are full three channel color images with each color being one of the normal vector’s component values. You can imagine a grid of unit vectors all pointing various directions; it turns out that a perfectly reasonable way to encode, manipulate, and store exactly that kind of information is simply in a familiar color image format. A purple color (no green component) tends to represent areas that are parallel with the original underlying flat surface.
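The encoding described above can be sketched in plain Python (this is the standard tangent-space convention, not Blender-specific code): each component of a unit normal in [-1, 1] maps to a byte in [0, 255], and the "flat" normal (0, 0, 1) becomes that familiar lavender-purple.

```python
# Map a unit normal's components from [-1, 1] to bytes in [0, 255] and back.
def encode_normal(n):
    return tuple(round((c + 1.0) / 2.0 * 255) for c in n)

def decode_normal(rgb):
    return tuple(c / 255 * 2.0 - 1.0 for c in rgb)

print(encode_normal((0.0, 0.0, 1.0)))   # (128, 128, 255)
```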

Normal maps are useful to get subtle bevels and less sharp edges since the normal vectors provide more information about how the surface transitions from one region to the next.

Surfaces rendered with normal maps suffer from not having the full range of realistic dynamic lighting effects. They also look flat when viewed obliquely — the more perpendicular the view, the better the normal map illusion. They also do not cast shadows like proper geometry would. Omissions like this can cause normal maps to suffer an illusion where the texture’s concavity or convexity is ambiguous without supporting context.

To use a normal map, plug an image node of the normal map into the "Color" of a "Normal Map" node. Then run the "Normal" vector from that to the "Normal" vector of the Principled BSDF node. Make sure to set the "Image Texture" node’s "Color Space" property to "Non-Color Data".

Default naming: normal nor nrm nrml norm

Ambient Occlusion And Cavity Maps

Blender has a shader node called "Ambient Occlusion" in the "Input" category. What this means is that some parts of the scene tend to shade other parts somewhat even when still visible. Wikipedia has a nice example of a grid of cubes separated by some distance like buildings on a street plan. The sides of the buildings will be darker than the tops even though everything is visible and exposed to the same light and possibly even at the same angles. This helps create proper dark shadows down where the buildings meet the street that otherwise would have no reason to have its brightness fall off.

It looks to me to be very roughly analogous to highlights, but in the opposite direction — lowlights. It makes clear the special places on a model where light can not bounce around as freely.

Normally this is procedurally computed at render time but it probably can be baked into a map too.

Cavity maps are a very similar idea and are mostly used to represent where dirt would likely build up in a model. Blender even has some specific magic tricks for this. Under the Texture Paint workspace select the Vertex Paint interaction mode. Then find the Paint button and look for Dirty Vertex Colors. This should create a vertex map of cracks and places dirt would likely accumulate. I did exactly this with a Suzanne head and a default Blender scene and it worked pretty well. More details on this technique are described in this video.

Default naming: ao ambient occlusion

Reflection Maps

This hints at where reflections are strongest. Note that in Physically Based Rendering (PBR) engines, this map is superfluous and ignored because that is precisely what the engine calculates. But maybe more important for pre-baked game assets and specialty rendering situations.

Roughness, Specular, Gloss, And Metalness Maps

A roughness map is used to help the rendering engine know where light cleanly bounces off and where it might get some random bounce angles. This could be used, for example, if you have a polished surface (low roughness) that has a scuffed or sandblasted section (higher roughness).

The roughness map can be plugged into the "Roughness" property of the Principled BSDF shader node. Make sure to set the "Image Texture" node’s "Color Space" property to "Non-Color Data".

Default naming: roughness rough rgh

Gloss is essentially the same as roughness, but inverted.

Default naming: gloss glossy glossiness

Closely related is a specular map which defines the specular property of a surface. The subtleties are subtle, but generally paper is diffuse, with reflections scattering every which way, while metal is highly specular, with reflections leaving at the same angles as the incident rays. Both specular and metalness are similar and competing workflows. Usually only one is used.

Default naming: specularity specular spec spc

There is a map similar to specularity for metalness that defines where a surface is metallic and where it is not. This can be useful for a rusty metal texture where the rusty parts are not behaving optically like "metal" but some of the shiny parts are. Another good example would be a circuit board where most of it is not metal, but the leads can be highlighted as metal. This kind of map can plug into the "Metallic" property of a Cycles' Principled BSDF node.

Default naming: metallic metalness metal mtl

Some new systems (Unreal 4 apparently) have a provision for "fuzz maps" which help define where cloth and fibers are fuzzy.

Opacity Or Transparency Map

You can have a map showing where your surface is transparent, basically mapping the amount of light that can get through. This can be handy for a car sun roof or other kind of window. Maybe a lattice. Maybe dirt on glass, etc.

To use this kind of setup, plug your opacity map into the "Factor" of a Mix Shader Node. The bottom "Shader" property of the "Mix Shader" can connect to the Principled BSDF result of your render (i.e. adding transparency is done at the end of the pipeline). The top "Shader" is hooked up to a "Transparent BSDF" shader node. The output of the "Mix Shader" node’s "Shader" can go to the "Material Output" node’s "Surface". Or… that technique may be needlessly complex and obsolete. You can also try taking the Image node "Color" of an opacity map and plugging it to the shader’s "Alpha" property.
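The Fac blend the Mix Shader performs is just a per-channel linear interpolation; a plain Python sketch (illustration only, wired here so Fac 0 picks the transparent input, matching the setup above):

```python
# Blend two "shader results" (as RGB tuples) by a factor in [0, 1].
def mix(fac, shader_a, shader_b):     # Fac=0 -> shader_a, Fac=1 -> shader_b
    return tuple((1 - fac) * a + fac * b for a, b in zip(shader_a, shader_b))

transparent = (0.0, 0.0, 0.0)
shaded = (0.8, 0.2, 0.2)
print(mix(1.0, transparent, shaded))   # fully opaque -> (0.8, 0.2, 0.2)
print(mix(0.0, transparent, shaded))   # fully transparent -> (0.0, 0.0, 0.0)
```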

Default naming: alpha opacity (Also note transmission and transparency maps are something Node Wrangler knows about.)

Emission And Subsurface Maps

An emission map can define where a surface is giving off light. Perhaps useful for lit details like LEDs or skyscraper windows or vehicle lights or something like that.

Default naming: emission emissive emit

Presumably a subsurface map can help define where regions are doing interesting things beneath the surface of an object. Perhaps this kind of map would be good for backlit vasculature or something like that.

Default naming: subsurface sss

UV Map

A lot of resources help you make a UV map from geometry so you can go paint it in Gimp. But my problem is usually trying to wrap a known image onto my geometry in a way that seems sensible. For example, if I have an image of some tiles on a floor and I want to apply them so they’re roughly like they are on the floor, how does one adjust the UV map?

Tips:

  • I generally split the screen so that one of my windows can be the UV Editor.

  • The (vector) geometry in the UV Editor comes from the selected object, but only in Edit Mode. So make sure you are in Edit Mode with the geometry you want to unwrap selected.

  • Sometimes you have a long rectangle that is wrong. Don’t be fooled by how the UV editor unwraps it by default. Sometimes you have to rotate it 90°, maybe even in a direction that looks wrong relative to the scene. Remember that the 2d unwrapping is a very different thing from the 3d model.

Camera Tracking And Projection Painting

This video introduces camera tracking which seems to be a kind of photogrammetry with more hand work. You help Blender identify some tracking points in various frames and a camera solver can reconstruct where the camera must have been and what its parameters (error, lens, etc) were. This allows you to do amazing things with the 2d image that combines elements of a 3d model.

Here is part 2 which shows how a random scene with naturalistic trackers can be tracked and an object added. Part 3 is very long but comprehensive. And part 4 has examples explained more thoroughly.

This video shows how to remap textures from a 2d photo into a clean perpendicular view, including unwrapping cylindrical objects like tree bark.

Here’s another one showing panorama stitching up to the level of spherical environment maps.

Part 3 in the series shows how to do projection painting to remove an object from a video. Pretty amazing.

Same guy doing an amazing job of facial motion capture in Blender. This seems to have implications for other 3d reconstruction tasks.

Here is another guy using this to project something onto a monitor (plane tracking).

Tips: It seems like all of these techniques really get better results from changing Render Properties → Color Management → View Transform from "Filmic" to "Standard".

Here’s another very competent guy doing a full end to end demonstration of putting a face mask on footage of a live action actor. Amazing. A great inspiration on what this can accomplish with many great workflow tips.

Procedure

  • Render Settings → Color Management → View Transform → Default ("Standard" in 2.8+). Do not use "Filmic". This is widely reported to be important for success.

  • Convert source footage to individual still frames. A JPEG sequence with 100% quality seems fine. If still production seems glitchy see remedy here.

  • Most of this tracking stuff is done in the Movie Clip Editor ([S]-F2). Choose 30fps; since we now have stills, it doesn’t matter.

  • "Set Scene Frames" to match project length (number of stills). About 200 seems like both a sufficient and tractable quantity.

  • "Prefetch" loads the entire sequence into memory to avoid delays and playback stuttering.

  • Render Settings → Color Management → View Transform → Default. That’s right, double check this. Apparently it is quite important.

  • In the Tracking Settings (in the left menus) check Normalize.

  • Set Correlation to .9. I.e. continue tracking attempt if it’s 90% confident it hasn’t lost it. Go quality or go home!

  • Normalize in the Tracking Settings can be useful to make the tracker invariant to some lighting changes.

  • Drop a tracker with [C]-LMB. Unhide its search area with [A]-s.

  • There’s also the "Track → Marker → Detect Features" button which can do a bunch of this tracker assignment automatically. It can even stay within an annotated region (get one with [A]-d and draw a boundary) by choosing the correct "Placement" option in the "Detect Features" properties that shows up in the lower left (F9 position). The more trackers the merrier, but 8 is the minimum for a full solve.

  • In the track window if some particular color channel is messy you can disable it to only focus tracking on the clearer channels.

  • [A]-rightarrow tracks forward to the next frame. Maybe [C]-t tracks forward? [C]-l locks all trackers after the tracking?

  • Save so you do not lose tracking progress during intensive solving calculations and playback, etc.

  • You can go to the "Graph" view which is in the pulldown that probably has "Clip" selected — between the "Tracking" and "View" items on the second row of menu cruft. Once in graph mode, you can look for trackers that don’t quite seem to know WTF is going on. It may be smart to use "x" to delete the ones that don’t synchronize with the consensus. If you’re very low on tracking points, you can maybe unlock the ones that are bad and manually sort out the problem spots.

  • Under Plane Track in the left menu, you can choose "Create Plane Track". Then you can drag its corners to four trackers that are defining some kind of plane, perhaps the ground or a screen, and that should be a helpful reference.

  • The Keyframe A and Keyframe B settings should be two positions where the parallax can best inform the algorithm of the orientation change. These are used to calculate the initial model and are propagated to the other frames so it’s good to have the biggest difference in pose between these.

  • With 4 trackers you can make a plane. If you’re a bad ass you can project that plane with more tracking features but the normal way is to collect at least 8 trackers from the original footage. Once you have that you can try the Solve Camera Motion. This basically tries to look at all the points you’ve identified in a lot of changing 2d frames and calculate (solve) where the camera must have been (and been pointing) to achieve that result. The "Solve error: 0.8637" at the top right is how well it did. An error of 0 is perfect (and generally unattainable in the real world). .3 is very good, .7 maybe useable, 1 and above is janky, and more than about 3 is probably unusable. I get the feeling that this error value is measured in pixels — I don’t know exactly how, except for object solving. For object solving using a stationary camera on a tripod, the tolerable error can be a little higher, maybe 1 pixel.

  • For general solving you might want to change the "Refine" setting in the "Solve" panel from "Nothing" to "Focal length, K1, K2".

  • To view the results, get a 3d viewer window and then back in the Solve panel click "Setup Tracking Scene". At first it will be muddled looking. Select 3 trackers and under "Orientation" choose "Floor". This will align those three markers to the floor plane. This should help everything get sensibly oriented. You can also choose 2 trackers whose distance apart is known or guessable and enter the known distance and pick "Set Scale". You can also pick your favorite tracker and use "Set Origin" to do that. Then choose a tracker to the "right" and choose "Set X Axis".

  • You can also go into camera view in the 3d viewer. With the camera selected go to "Camera Properties → Background Images → Add Image → Background Source → Movie Clip → Active Clip" (yes, even if you’re using stills).
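The footage-to-stills step near the top of this procedure can be sketched like this. A minimal sketch assuming ffmpeg is installed and the source footage is a hypothetical footage.mp4; the flags are real ffmpeg options but tune to taste.

```shell
# Explode footage.mp4 into maximum quality JPEG stills, one per frame.
# -qscale:v 1 is the best JPEG quality; %04d names them 0001.jpg, 0002.jpg...
mkdir -p stills
ffmpeg -i footage.mp4 -qscale:v 1 stills/%04d.jpg
```

PNG output (stills/%04d.png) also works if you would rather avoid JPEG artifacts at the cost of disk space.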

Shading

Regular Glass

I have struggled with ordinary glass panes in windows before. This tip is quite good and does seem to do the job very well. Here is the summary of what that looks like.

Sensible Glass

Rendering

Background

Fancy people set up every part of their virtual universe but I’m often just trying to show off some single artifact I’ve been working on. The default dark gray background that shows up on renders is often at odds with dark items. To change the way the rendered background looks when it is empty, go to "Properties → World Properties → Surface → Surface" and set that to "Background". Now you can change the color and "strength" of the background. Note this does not do anything to make wireframe or solid view look any different.

Apparently to change the background of the 3d Viewport for wireframe and solid mode, you can go to Edit → Preferences → Themes → 3D Viewport

The color of the grid lines can be changed here. But keep scrolling down… farther… keep going… almost there… until you get to an entire section questionably called Gradient Colors. In there you will find Theme Background Color which has options. By default the eponymous "gradient" is turned off, but turning it on can be a festive thing to do. In this context, I get the feeling that "low" means the bottom of the screen, not a color property; high is the top.

Noise

One of the most frustrating things about trying to get high-quality realistic rendering is noise artifacts, sometimes called "fireflies". These manifest as a sparse field of spuriously bright pixels. Here are some places to start when trying to figure out how to minimize this problem.

  • The new blender.org videos are fantastic and this video explains all you need to know in about 3.5 minutes. Start here.

  • Blender doc about dealing with noise.

  • Excellent discussion of noise and how it can be tamed using Cycles.

Exporting To Unity

I think this process has been streamlined with some necessary transformations automatically sorted out. Now just doing the export step works.

  • File → Export → FBX → Write a FBX file

When imported into Unity, it opens and looks right in the scene. Here are how the axes are adjusted.

Blender   Unity   Direction
-------   -----   ---------
+Z        +Y      Up
-Y        +Z      Forward
-X        +X      Starboard

The location of the imported asset seems correct when placed exactly by setting the Transform Position attributes in Unity.

It may be wise to set the units in Blender to match what you’re trying to achieve in Unity.

Hmm. Yes, there still are problems. Two clues that things didn’t work quite right: the icon of the model asset shows it pointing up (even though it seems correct when placed in the scene, that icon is not consistent), and the position is just a little bit off by a weird amount.

This video is quite specific about what the problems are and what you can do to minimize Blender/Unity conversion problems.

I did the easy first method of clicking the "Experimental transforms" box during the FBX export. That seemed to cure problems.

Partially

Use [C]-b to select a box to render. This puts the powerful and resource hungry rendering engine to work only in this region of interest.

To clear this rendering ROI box [C]-[A]-b. This is important if the ROI box was set in the camera view because it will also clip the final animation render.

In the Sampling section of the Render properties, the Samples box contains two fields, "Render:" and "Preview:". The Preview one is for what is rendered in the preview box set with [C]-b. This can be very helpful to determine the level necessary for a more complete render.

Camera Positioning

If you really have your act together, you can be very explicit of course, but I often need to move around looking at the scene as the camera will see it. To do this easily, use [backtick,1] to get to the camera view. Then go to View → Navigation → Walk Navigation. Now you can use WASD to pan around (also E for up and Q for down) and the mouse to aim the camera. Once you think it’s a decent shot, press F12 to render it.

Another one that sometimes comes up is what I’ve seen other programs call "twist". This is where the camera is rotated about its long axis. To change this — perhaps while looking through the camera view, or not — select the camera and [r] for rotate, and then [z,z] to align the rotation about the local Z axis which is the direction the camera points. If you don’t really want the camera to be changed but just need this effect in the viewport view, you can use View → Navigation → Roll Left/Right.

Camera View Properties

Although it is a somewhat exotic application, one thing Blender can do really well is show you roughly what some real cameras would have in frame by matching those cameras in the model and rendering. The render is exactly the same (or trying to be) as what a real digital camera of the same resolution is seeing. The trick then is how to match your Blender virtual cameras to the exact specifications of your real ones. This Q&A addresses the issue. The basic strategy is to go to the (SLR-looking camera) Render icon and go to Dimensions and set the pixels there. That sets the aspect ratio. Then go to the (Movie camera) Data icon and look for "Field of View". This field of view angle seems to apply to the axis with the largest number of pixels and the other axis is scaled appropriately. This means a 1000x2000 render with a 90 degree FoV will have that 90 degrees in the vertical orientation.
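A quick back-of-envelope check of that claim (my own pinhole-camera arithmetic, not anything queried from Blender): the half-angle tangents scale with the pixel ratio, so the 1000px axis of that example gets about 53 degrees.

```shell
# For the 1000x2000 example: the 90 degree FoV applies to the 2000px
# axis; the 1000px axis gets 2*atan(tan(45 deg) * 1000/2000).
awk 'BEGIN {
    pi  = atan2(0, -1)
    t   = sin(45 * pi/180) / cos(45 * pi/180)   # tan(45 deg) = 1
    fov = 2 * atan2(t * 1000/2000, 1) * 180/pi  # atan2(x,1) = atan(x)
    printf "short-axis FoV: %.2f degrees\n", fov
}'
# prints: short-axis FoV: 53.13 degrees
```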

Freestyle

Freestyle rendering mode is a way to achieve interesting non-photorealistic effects in final renders.

One problem I had was where I had a very simple blocky shape with simple 90 degree angles and on a couple of the straight lines, there was a strange fluctuation of width. This made no sense, but I tracked it down to a particular setting. In the "Freestyle Line Style" section there is a property called "Chaining" which in theory can link geometry together. Turning this off cured the problem. I never figured out what it was trying to chain or how since the defective features were single edges with no complexity whatsoever.

Performance

Also note that you can change the level of detail of the rendering to speed up test renders. To find this go to the "Properties" bar; choose the "Render" button which looks like a camera; look for the "Sampling" section which is between "Freestyle" and "Geometry"; in this section look for the "Samples:" fields; specifically look for the "Render:" field. A value of perhaps 1024 is pretty decent and takes a while and a value of 32 or 64 can make things go quickly for previews. I don’t know exactly why sometimes this Sampling section is absent but this seems to be the important thing to adjust for grainy quality problems.

If you have access to a fancy GPU, rendering might be faster. But it might not. This is not a slam dunk in the Blender world like in other applications. If using a GPU consider adjusting the values in "Render→Performance→Tiles→Center→(X,Y)" to something larger than the normal "16" that seems pretty decent for CPU rendering. Note that if you do have a fancy GPU that is competitive or superior, you can run two instances of Blender, one rendering with the CPU and another with the GPU. I can’t easily think of a better way to punish a computer’s entire capability!

Single Image

Sometimes you don’t need a video and you just want a still render of your scene. Go to the UV/Image Editor window type at the bottom of the rendered image. Go to Image and then Save As Image. Also F3 works.

Note that default Blender 2.8+ seems to want to open a completely new window when asked to render. This can be fixed in the preferences. Look for this.

Interface → Temporary Windows → Render In → Image Editor

To Images

It often makes a lot of sense not to render to a full video file. If Blender crashes at frame 300 of several hundred while rendering directly to a video file, you’ll need to solve the problem and start from the beginning. Rendering to images, therefore, usually makes a lot more sense. This also allows you to distribute the load of rendering between many different computers.

The important trick is how to assemble the still frames into the final video file that you want. You can use another tool like this.

avconv -i %04d.png -r 24  -qscale:v 2 xedlamp.mp4

This will take everything named something like 0003.png and create the video file. Note avconv is the now mostly defunct libav fork of ffmpeg, so explore ffmpeg if that’s what your system has (see my video notes for annoying details).
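As an aside, here is my hedged guess at the equivalent ffmpeg incantation (same hypothetical filenames; the flags are real ffmpeg options but the quality settings are just reasonable defaults, not tuned for this exact footage).

```shell
# Same job with ffmpeg: gather 0001.png, 0002.png, ... into a 24fps video.
# -crf 18 is a high quality setting for the default H.264 encoder and
# -pix_fmt yuv420p keeps picky players happy.
ffmpeg -framerate 24 -i %04d.png -crf 18 -pix_fmt yuv420p xedlamp.mp4
```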

Here’s another trick to get an animated gif. Note that ImageMagick’s -delay takes centiseconds (1/100ths of a second), so whole numbers are what you want; -delay 3 is roughly 30fps.

convert -delay 3 -loop 0 lamp*.png xedlamp.gif

Or here are some fancy ImageMagick options.

convert -quality 5 -dither none -matte -depth 8 -deconstruct \
-layers optimizePlus -colors 32 -geometry 400x -delay 2 -loop 0 \
frame*.png a.gif

Note especially -delay 2 which seems to be the minimum value. Anything less will, strangely, add much more delay (ref). Also note that if there are errors, the optimizer might need more room. See this.

You can also use the Blender video editor and "Add" "Image", select all the images and eventually export it when you’re happy. See video editing below.

Making Animated Gifs

I think it’s a weird omission that Blender doesn’t have a native backend (that I know about) for animated gifs. But that’s fine. In this rare instance of Blender being frugal with features, there are plenty of ways to do the job.

First decide if you want a transparent background. This isn’t about gifs per se, but it tends to be a popular idea with gifs. Go to Render Properties (the camera tab) and look for the Film section. There should be a simple toggle for Transparent. Now back to Output Properties (the printer tab) make sure Color has RGBA if you’re going for transparent. Or not if not.

Set your Resolution X and Y as normal and pick File Format as PNG. Make sure you have your directory path (top of Output section of Output Properties) set to something sensible; I like /tmp/R/ for "render". Now Render → Render Animation from the top bar menu ([C]-F12, and remember you must use a real Ctrl key, not a remapped one).

Now you have a collection of PNG frames in /tmp/R/. How do you convert those to an animated gif? The most scriptable technique is to use gifsicle. Unfortunately, this program only assembles gifs from gifs. So you’ll need a process like this.

#!/bin/bash
# Start with a directory of still PNG frames as supplied by Blender.
# End with an animated GIF.
# You might need: sudo apt install gifsicle gifview imagemagick
P=/tmp/R           # Make sure this path matches your output directory.
O=/tmp/myanim.gif  # Set this to whatever you want or rename later.
cd $P
echo "== Converting Input PNGs..."
for F in *png; do echo "$P/$F -> $P/${F%%png}GIF" ; convert $F ${F%%png}GIF; done
echo "== Composing Animated GIF..."
gifsicle -O2 --delay 5 --disposal previous --loopcount=0 *GIF > $O
echo "== Inspecting..."
stat $O && identify $O | head -n1 # Show useful size info to confirm success.
gifview -a $O            # Check the work. "q" to quit.
echo "== Clean Up..."
read -p "[C]-c to keep stills, Enter to delete them."
rm -v *GIF *png    # If you're confident you didn't need the stills.

Note that gifsicle is a weird program with weird options. The delay is in hundredths of a second (so --delay 5 shown here is 50ms per frame). The disposal method may not be needed if you do not need a transparent background. The output file is only found on standard output, so redirection is necessary.

Remotely

One obvious example for normal people is letting AWS do the heavy lifting for you. This way you can optimize the type of engine you use (GPU or CPU) for your project and get all of that heat out of your house. Step one is to set everything up exactly like it needs to be in your Blender project. Make sure that if you were using a GUI session you could just open the project and hit [F12] (Render Image) and everything would work perfectly. If that’s the case, log into your remote Linux system.

Rsync your project to the remote system. Run the render with something like this.

blender -b chessScene.blend -o //chessScene -F PNG -f 1

This will produce a file chessScene0001.png with no GUI fuss.

To do a complete animation use -a for the whole thing or pick specific frames with -f 1..20 or something like that.

blender -b lamp.blend -o //lampoutput -F PNG -a

The // syntax indicates paths relative to the blend file. You can also include explicitly padded frame numbers with #, one per digit. For example test-####.png becomes test-0001.png. Sometimes you might want to just dump them to somewhere in temp. I don’t know if this is overriding the Render Properties but it might be. Seems you don’t need a -o if you’re happy with what’s specified in the .blend file.

Of course one problem with this strategy is that you can send up some geometry that is pretty lean and render it into lavish high frame rate bit maps; this can blow up the size considerably. So just think about that with respect to any transfer/storage fees that might exist. If you just happen to have access to an awesome private GPU machine, go for it!

Here’s a little script I wrote to help get a quick preview of full renders without doing all the frames. Lots of good information contained in here and will be a useful template for remote operation and custom project build scripts.

#!/bin/bash
# =========== Settings  ===========
# The Blend project to render.
FILE=./DYNAMIC-xed.blend

# Directory where rendered frames should go.
# This will be created if you forget to.
T=/tmp/renderQC

# Starting frame - ignore frames before this.
# Hand enters near 70. Ball drops around 113.
S=70

# Ending frame - do not render any frames after this.
# Ball exits at 335.
E=336

# Skip value - for test renders skip frames (e.g. odd ones, every
# 5th, every 15th, etc.). Lets you see if there are any obvious
# problems before spending hours working on the whole thing on every
# frame.
SKIP=6

# =========== Program ===========
mkdir -p $T # Just in case it's not already there.

STIME=$(date "+%s") # Starting time in seconds.
# This is the short options way if you ever need to cut and paste it.
#-------------------------------
# blender -noaudio -y -b -o /tmp/renderQC/ -s ${S} -e ${E} -j ${SKIP} ./DYNAMIC-xed.blend
#-------------------------------

# Here is the same thing but with less cryptic long options.

blender \
    -noaudio \
    --background ${FILE} \
    --render-output /tmp/renderQC/ \
    --frame-start ${S} \
    --frame-end ${E} \
    --frame-jump ${SKIP} \
    --render-anim

# Or cherry pick specific frames you want.
    #--render-frame 70 \
    #--render-frame 71,73,75,77 \
    #--render-frame 100..120 \ # Including 100 and 120.

# This will try to wrap up the generated frames into a video file that
# can be played back smoothly. This requires every frame to be present
# or at least they must all be renamed to be sequential.
#ffmpeg -i "${T}/%04d.png" "${T}/video.mp4"
#echo "Hopefully a video was created called: ${T}/video.mp4"

# If you have a sampled non-continuous frames for a quick preview, you
# can make an animated gif out of them.
# Note this is smaller for quick preview purposes. Use the same
# command but with geometry set to something other than 500 pixels wide.
convert -delay 5x30 -geometry 500x -dispose Previous -layers Optimize ${T}/*.png -loop 0 ${T}/sequence.gif

echo "Hopefully a gif was created called: ${T}/sequence.gif"
echo "View with a browser, or:"
echo "   gifview -a ${T}/sequence.gif"

ls ${T}
echo "If you see a list of stills, they were in: ${T}"
ETIME=$(date "+%s") # Ending time in seconds.
DURATION=$(( ${ETIME} - ${STIME} ))
NUMOFPNGS=$(ls ${T}/*.png | wc -l)

echo "That whole operation took ${DURATION} seconds."
echo "And you now have ${NUMOFPNGS} PNGs"

I called that script Command Interface Lights Out Rendering or the easy to remember ciloren (pronounced "Kylo Ren").
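Remember that the script’s commented-out ffmpeg step needs strictly sequential frame numbers; if you used SKIP, your stills have gaps. Here is a small sketch of a renumbering helper (my own hypothetical function, not part of Blender) that closes the gaps.

```shell
# Renumber sparse frames (e.g. 0070.png, 0076.png, ...) into a gapless
# sequence (seq-0001.png, seq-0002.png, ...) so that ffmpeg's %04d
# input pattern can find every frame.
renumber_frames() {
    local DIR=$1 N=1 F
    for F in "$DIR"/[0-9]*.png; do
        [ -e "$F" ] || return 0   # no numbered frames; nothing to do
        mv "$F" "$(printf '%s/seq-%04d.png' "$DIR" "$N")"
        N=$((N+1))
    done
}
# Example: renumber_frames /tmp/renderQC
# then:    ffmpeg -framerate 30 -i /tmp/renderQC/seq-%04d.png preview.mp4
```

The lexical glob order preserves the frame order because Blender zero-pads the names.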

Note that running blender remotely from the command line may cause trouble due to the lack of an X Window System (answering system calls?). This thread talks a lot about the issue of Unable to open a display. I was finally able to get things going by prefacing my blender command with a DISPLAY setting like this.

DISPLAY=:0 blender -noaudio -y -b -o /tmp/renderQC/ -s ${S} -e ${E} -j ${SKIP} myproject.blend

Open Shading Language

OSL is used by the Cycles render engine. Examples of it can be found in /usr/share/blender/scripts/templates_osl.

Rigging

Rigging Resources

I’m mostly interested in simple machine simulations. This should be easier than rigging a human ninja, but strangely it is not.

Here is an excellent simple tutorial rigging a simple robot arm. Here is a landing gear featuring pistons and hard pivot joints.

Here’s a good video showing the normal rigging of a normal biped figure.

This whole series, Humane Rigging, is very well done. I watched it all just to marvel at the amazing complexity that high quality rigging seems to require. This video series was nice because it was not "easy" or for beginners — it told the mathematical truth about many of Blender’s weird decisions. Overall very helpful to understand things even if you’ll never rig and animate a battle between a robot squid and a Medusa.

Danpro has a nice vehicle rig series. I think if you watch these 10 times, the ideas will start to sink in a bit.

Rigging Machinery And Robots

Here are some miscellaneous tips I’ve collected on how to rig mechanical systems. This could be anything from a robot arm to a canal lock. Although this is not the ultimate end goal of Blender’s rigging system — which tends to focus more on people and animals — it is actually quite tractable. The only difficulty is that this is not the primary work discussed in most tutorials.

When creating the bones, place them in the mechanical part that moves. For some things this is simple; for example a backhoe might have a series of articulations in a line and the linked bones make sense. But in a robot arm, there may be a section where the end has a motor on it and the next section is at the end of that motor’s shaft shifted over some distance. Just create a new bone in edit mode or duplicate one and avoid extruding in this case. Then just parent the end bone to the one closer to the base using "Keep offset".

Many mechanical systems have very limited motion. If a robot arm joint only moves the elbow in one axis you can restrict the motion from pose mode by clicking the little lock icons next to the axes that do not rotate. It may be smart to lock off the scale too — no reason for that to change usually.

Y is the local axis that does a twist, meaning the bone doesn’t move but its local X and Z axes roll around.

Naming things well is always helpful. Here is a good convention I have seen:

  • DEF-link2-ball - "Deformation", the actual motion bone

  • CTRL_GLOBAL - "Control" used to control aspects of the rig

  • CTRL_IK-link3-hinge - Inverse kinematics control bone.

  • CTRL_FK-link2-ball - Forward kinematics.

  • CTRL_POLE-elbow-hinge - Pole target.

  • MCH_IK-link2-hinge - "Mechanism", part of IK chain but not directly controlled.

Another good idea is to name the whole Armature something sensible (instead of the default Armature) — something like RIG-landing_gear.

For control only bones you can turn off the deform property in Bone Properties.

It is smart to have everything zeroed out before starting. Have your mesh sitting at 0,0,0 and all transforms cleared (e.g. [Alt+r] to clear rotation, [Alt+s] to clear scale, or use [Ctrl+a], etc.). Note that while you’re posing things in Pose Mode you can reset rotations with [Alt+r] back to the rest pose.

I’m often fine with using [Shift+z] to switch to the wire mesh display, but many people like to select the armature and then go to Object Data Properties (the little running stick figure) and then go to Viewport Display and click In Front (goes well with [Alt+z] partial transparency). Just under that is the setting for Axes which can also be helpful when troubleshooting, especially alignment/roll/twist problems.

To hook up the mesh to the armature you do not use the same kind of process you’d use with an organic model. With animals and people, you tend to parent the skin mesh(es) to the skeleton armature with [Ctrl+p] and then With Automatic Weights. But with a rigid mechanical assembly, this is wrong. You want to use With Empty Groups. What this does is make groups for all the mesh components (the robot’s upper arm mesh object, lower arm mesh object, etc.) but leaves them unassigned. Automatic weights tries to smoothly distribute, for example, the upper and lower arm around the elbow. But for a mechanical model, you’ll want to assign an all or nothing approach yourself. Now after parenting with Empty Groups if you go to the Object Data Properties tab for each of the parented robot component meshes, you’ll see under Vertex Groups a list of all the bones. If you go to Edit Mode you’ll have the chance to select vertices — all is popular with mechanical rigs — and click the Assign button that appears in the Vertex Groups area. If some part of your mesh is moving with the wrong bone, you can try to Remove those vertices from that bone in the same way.

Note that when you assign 100% weights of a vertex group for an object to some bone, you are creating an Armature Modifier. Check for it in the modifier stack. This is ignorable if you have no other modifiers. If you do, however, it’s good to have this modifier calculate before any subsequent expensive ones are calculated. In other words, this will help put the simple mesh in the right place and then make it complex rather than having to do the latter and reposition a complex mesh. To change the order of the modifiers, use the little down v button and select Move to First.

When making an Inverse Kinematics system, you are basically making a second armature that uses an IK constraint and there is a slider value between that armature’s influence and the normal forward kinematics one. Since the end will control things closer to the base it is not quite right that it is parented to it. For example, if your FK system allows positioning an upper arm, then a forearm, then a hand, that’s how it is parented. But if you want to position the arm by specifying where the hand is then the hand in the IK rig can’t be parented to that chain. You need to start by clearing the parent relationship of the IK armature’s "hand" (or whatever is controlling things in an IK way). To do that, with the hand bone selected, use [Alt+p] and Clear Parent. That hand bone in the IK chain actually must be re-parented to the base so that it behaves sensibly if the whole mechanism is moved; do the parenting in the normal way keeping the offset.

This series is really excellent for carefully developing a custom rig for a robot arm complete with FK and IK controls. The one I linked to is part 5 of an excellent 6 part series and shows the main tricky bits with the IK drivers.

Setting Mesh To Be A Posed Position

Often I create something like a robot and rig it and I use the rig in the normal way. But sometimes I just need my robot unrigged in a cool pose, perhaps as a static background prop. How do I get my robot all lined up neatly to take on the pose set with the rig and then forget about the rig?

The answer is to apply the Armature modifier for each mesh. Now that mesh is as if you put it there with normal transforms. Delete the rig if you want.

Four Bar Linkages With Bones

A 4 bar linkage is a good test of Blender’s ability to do sensible things with complex assemblies. Here’s a description of a simple process for setting this up.

  • Create an armature object with two connected bones (generally extrude the second bone in edit mode). Call this armature A. Call the head (fat) end bone "Bar_4" and the tail end bone "Target". Bar_4 will be the handle that you position.

  • Create a second armature like the first ([S]-d duplication works). Call this armature B. Call its head end "Bar_2" and its tail end "Bar_3". These are the bars that will be calculated and not explicitly set.

  • Mentally note that "Bar 1" is simply the span between the head of Bar_4 and the head of Bar_2 since those heads do not ever move. To visualize a practical example, imagine that Bar_4 is the door which you want to control in your scene; Bar_2 and Bar_3 are the moving parts of the door closing mechanism; Bar_1 is simply the door frame between the wall mount for the closer rods and the door hinge.

  • Move the tail of Bar_3 so that it is in the same location as the tail of Bar_4 (which is also the head of Target).

  • Select armature B and go to Pose Mode.

  • Select Bar_3 and look for the "Bone Constraint Properties" tab. Click "Add Bone Constraint" and choose "Inverse Kinematics". Choose armature "A" for the Target. Choose A’s bone called "Target" for the "Bone".

  • It might be smart to set a chain length of 2 since that matches the actual two-bone chain. However, leaving it at zero also works because zero tells the constraint to use the entire chain, which here is the same thing.

  • You should now be able to rotate armature A’s Bar_4 bone and have Bar_2 and Bar_3 do the right thing ("bar 1" is implied as part of the stationary part of the model).

  • Since you’re not trying to animate bones as the final goal, you’ll want to hook the rig up to your actual model. Select the real/rendered object that represents one of the moving parts, one of your part’s bars in the linkage.

  • Shift select the armature. That will allow you to change to Pose Mode with some armature object also selected as the non-active selection.

  • Now that you’re in pose mode you can and must select the specific bone with a shift click. Once you have the specific bone as the active selection and its real world object also in the selection, parent with [C]-p. Choose "Bone" as the parenting style (Not "Bone Relative").

  • Pose Bar_4 and enjoy!

4-bar linkage diagram
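
The geometry the IK constraint is solving here can be checked outside Blender: the free joint between the two computed bars sits at one of the two intersections of a pair of circles. A minimal plain-Python sketch, using generic four-bar naming rather than the bone names above, and assuming a reachable configuration:

```python
import math

def four_bar_joint(a, b, c, d, theta):
    """Ground pivots at (0, 0) and (d, 0). The input bar of length `a`
    is at angle `theta`; returns joint A (end of the input bar) and
    joint B, where the coupler `b` meets the follower `c`. B is found
    by intersecting circle(A, b) with circle((d, 0), c)."""
    A = (a * math.cos(theta), a * math.sin(theta))
    dx, dy = d - A[0], -A[1]
    r = math.hypot(dx, dy)               # distance from A to the far pivot
    h = (b * b - c * c + r * r) / (2 * r)    # distance from A along that line
    k = math.sqrt(max(b * b - h * h, 0.0))   # perpendicular offset
    ux, uy = dx / r, dy / r
    B = (A[0] + h * ux - k * uy, A[1] + h * uy + k * ux)  # "upper" solution
    return A, B
```

The IK solver is doing essentially this (picking one of the two intersection solutions) every time you rotate the handle bone.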

Rigify

Rigify is a first-class addon present in stock Blender that seems to pre-emptively do a lot of typical rigging jobs so you can skip a lot of typical work. I found it interesting that you can add ([S]-a,a) a rig for a bird, cat, horse, shark, or wolf. I don’t know why those animals, but they could be helpful. There’s also a fancy human, basic human, and basic quadruped to get you started.

CGDive has a superb video series that covers rigging thoroughly. Without a resource like this, it really is hard to make any sense out of how Rigify really works and how to get the most out of it. I found the videos on custom Rigify concepts to be exceptionally valuable — those are part 1 and part 2. (These links are updated for 4.0 Blender but he has an older 2.8 series that still has some value too.)

When you add a meta rig but it is hidden inside the model, you can select the "metarig" object (that’s Rigify’s automatic name), go to the Object Data Properties (the little running stick figure icon), then open the Viewport Display section, and make sure "In Front" is selected. Since I like wire frame mode more than normal users this bothers me less than it seems to bother every single Rigify tutorial creator; yet even I can’t understand why this is not the default.

It seems important to clear any lingering transforms on the model you’re trying to rig and also on the metarig which you may have just tweaked to match your model. Clearing both (with [C]-a) may be more than strictly necessary, but it does seem at least necessary that their transforms not differ.

The donkey work here is positioning the auto-generated metarig components (in Edit mode) inside your model’s exterior mesh. Here are some handy tips.

  • Hide bones you’re not using that are in the way with h. Unhide all the hiders with [A]-h.

  • To position an entire finger or other contiguous subsystem, don’t forget about selecting with l to get just linked things.

  • Consider X symmetry mirroring found on the far right of the top-most bar of the 3d Viewport editor.

If you use the meta rigs in a predictable way and keep them largely intact, then instead of parenting the mesh to them as you would by hand, you can go to the armature properties, scroll down, and find a "Generate Rig" button (must be in Object mode). This lets the Rigify addon do a really fancy job of rigging your model. Once the fancy rig is generated, you can parent the mesh to it (use "with automatic weights"). In theory you should be able to (save a copy first and) delete the meta rig since its job is done; I found this helpful since otherwise it just sits there in the wrong position. Actually it may also still be present in the layer #31 slot of the armature object properties → Skeleton → Layers grid. So it is safe to delete unless you want to retry completely (so Save a Copy!).

Rigify General Controls

Once you have the Rigify controls generated everything should make perfect sense. Ha! Just kidding! It’s actually a bewildering jumble and what to do with the riot of crazy controls you now have is not at all clear. This video is quite lucid in deciphering the Rigify workflow after the rig is generated.

The red paddle controls seem to be the Inverse Kinematic (IK) controls. The idea is that you can put a hand or foot somewhere and the intermediate arm or leg joints go to a plausible position. This is complicated because there are many solutions to where the intermediate joints could be to achieve the effect. The green circle controls are the Forward Kinematic controls and they are more what you’d expect from a simple parenting relationship — move the upper arm and the lower arm and hand goes too. Move the lower arm and the hand goes, but not the upper arm. By default the IK controls are fully active and the FK controls are disabled. This can be fixed by playing with the slider labeled something like "IK-FK (hand.L)"; changing it from 0 to 1 makes IK inactive and FK becomes active. This can be very useful to initially pose your model so that major body positions are set up sensibly.
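
A conceptual sketch of what that slider does, in plain Python (real Blender blends full bone transforms, not bare joint angles; this is just the idea, with the 0 = IK, 1 = FK convention described above):

```python
def blend_fk_ik(fk_angles, ik_angles, slider):
    """Blend two per-joint pose solutions. slider = 0.0 gives the pure IK
    result, 1.0 the pure FK result, and values between mix the two."""
    return [ik + (fk - ik) * slider for fk, ik in zip(fk_angles, ik_angles)]
```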

The red cogwheels seem to be connected to the red arrows. I think moving and rotating these arrows moves the whole set that includes the cogwheels, so no need to touch the cogwheels. It looks like these cogwheel/arrow sets handle rotating the joints of the shoulder and hip to control axial rotation.

The big box is the torso in its entirety. The yellow loopy things are the shoulders and hips.

Note that these colors are actually from something called "Bone Groups" found in the armature properties. You can make your own bone groups and give them custom colors. But at least you can go see what Rigify is thinking when it is color coding things.

One thing that can easily be problematic is that for IK pose controls, if you move the control beyond the length of the bone chain, the limb will stretch to make it work. This elongation is fine for Mrs. Incredible, but not useful for my projects. You can turn this off with the properties slider labeled "IK Stretch" which is present for IK elements (0 is don’t stretch, 1 is do what it takes).
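
The idea behind that slider can be sketched in plain Python (a conceptual illustration of the setting, not Blender’s actual solver):

```python
import math

def effective_target(chain_length, target, ik_stretch):
    """With ik_stretch = 0, a target beyond the chain's reach is clamped
    back to the reach (no elongation); with 1 the limb stretches all the
    way to the target; values between blend the two. 2D, illustrative."""
    dist = math.hypot(target[0], target[1])
    if dist <= chain_length:
        return target  # within reach: nothing to clamp or stretch
    reach = chain_length + (dist - chain_length) * ik_stretch
    scale = reach / dist
    return (target[0] * scale, target[1] * scale)
```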

When you do some zany IK motion, the FK rig (e.g. green circles on the arm) may just float in some random non-posed way. The trick is to synchronize the two modes with the buttons labeled FK → IK and IK → FK. I had all kinds of problems accepting this notation into my heart because I thought these were backwards to my way of thinking. I finally settled on the mnemonic that the operator in this case should be replaced with the phrase Defers To. Then it all makes sense.

The "Pole" stuff (as in "Toggle Pole") relates to a way to control IK bones with little balls that you move instead of rotating them. The "toggle pole" feature allows you to see the real pole target if the implied cogwheel stuff is not making sense.

"FK Limb Follow" on the torso basically decides if the extremities will have their angles locked to the torso or if they’ll try to remain in space like they are. This prevents counter-animating where you adjust the torso and now the arms are pointing in the wrong place and you have to reset the arms. Sometimes you want the arms to follow the torso, sometimes not. The "IK→FK (hand.L)" button basically resets the IK so that it’s not incompatible with that pose. So you can use FK to get things mostly in the right place, and then start using IK to make fine detailed moves, but when you switch to IK mode, it will bork into a weird position. This button just lets the system know that this pose is a good one.

Rigify Hands

Often I need the hands but I do not need the face. This means you can’t just use the basic meta-rig because it contains neither. The solution is to use the full human meta-rig, go to the armature data properties (little stick figure icon tab), and look for "Bone Collections". Then, with the rig in edit mode, make sure none of the bones are selected. Go to the bone collections list, click "Face", and then click Select. That should select all the face bones, which you can then neatly delete. I go ahead and get rid of all three face related bone collections too.

One puzzle that took me a long time to solve was how to get the finger controls to do what they’re supposed to. When you do the full rig, you get very nice finger controls allowing you to do stuff like curling all the fingers by scaling the little finger paddle controls. The problem I always had was that the fingers would not curl in a sensible direction. Instead of all the finger tips curling to your guitar neck like you want, one might curl down the fret board in a painful looking way. How are these little finger control paddles actually aligned? At first I surmised it must be bone roll, but that doesn’t seem to be it. I believe the paddles' stems are aligned with the finger bone numbered 1 (3 is the tip, 2 is one back, and f_ring.01.L is where a ring would sit). This is where the finger is pointing and the other two can only curl. The direction of that curl is determined by the alignment of the tip (e.g. the end of bone f_ring.03.L) to the axis of finger bone 1 (the stem). If you make sure the tips are right in line with the first finger bone, they will curl the right way. Knowing this ahead of time can save trouble so that you don’t rotate the metarig in an awkward way to begin with. One good tip is that you can align the finger bones as they need to be by doing something like setting Normal view and scaling the three finger bones of a finger on the X axis to 0.
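
That alignment condition can be checked with simple vector math. A plain-Python 2D sketch (the names here are illustrative, not bpy attributes):

```python
def tip_aligned(head1, tail1, tip, tol=1e-6):
    """True if `tip` lies on the line through the first finger bone
    (head1 -> tail1). A (near) zero cross-product magnitude means the
    three points are collinear, so the finger will curl in line."""
    ax, ay = tail1[0] - head1[0], tail1[1] - head1[1]
    bx, by = tip[0] - head1[0], tip[1] - head1[1]
    return abs(ax * by - ay * bx) <= tol
```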

The hand paddles are aligned by the alignment of the heads of the long arm bones. So how the arm bends is how those paddles will be laid out. This can be confusing for a T-pose where the arms are not bent to begin with. But giving them a little bending hint actually sets the paddle angle. The paddle size is set by the hand.L bone; I don’t know what else, if anything, it does.

The little controls shaped like slots opposite of the thumb on the back of the palm seem to control splaying the fingers. I don’t know what sets that up and how to do a good job of aligning that so that it works well, but it would be a nice feature if it were properly set up.

Rigify Problems

Note that after using Rigify, when you open a saved blend file you may get a "security warning" related to the file rig_ui.py. This official documentation indicates that this is a legit thing and controls the fancy UI that goes with the fancy rig. Best to enable this and let the scripts run.

Another similar question is: what are WGTS_Rig objects, and can they be deleted? The answer can be found here, yet I couldn’t understand it. Let’s just say that it might be possible to delete them.

Rigify Process

Here’s the process I used to make a fully rigged human arm. Note that I didn’t want the whole body rig, just the arm (think of Thing in the Addams Family).

  1. Might be wise to first do a [S]-s 1 to get the 3d cursor at the origin. [S]-a → Armature, and insert full Human (metarig) object. And if you aren’t needing fingers or a face, use Armature → Basic → Basic Human (Meta Rig) to massively simplify everything.

  2. Make damn sure everything is cleanly perfect with respect to transforms. Clear them all!

  3. Move the arm mesh to match the meta rig reasonably well. Scale your model (and clear it!) if needed — do not scale the metarig. Leave that pristine! So for example, if your arm had the shoulder at the origin, move it up to the metarig’s shoulder location. Clear that translation transform!

  4. With your model close to where the metarig is happiest, now adjust the metarig’s arm/finger bones to perfectly match the model. Try to keep all joints together that were together in the metarig. Do not delete any part of the metarig! Every part of it is needed to prevent errors. You will delete extraneous parts (e.g. fine face or hand controls) later.

  5. In Object mode with the metarig selected, hit the Generate Rig button in the armature’s Object Data Properties. Hopefully all the colorful controls of a fancy full human rig now show up.

  6. Delete the metarig — or move it 5m back out of the way, etc.

  7. Select your model first, and then the fancy rig object — in that order — and [C]-p to parent them with Set Parent To → Armature Deform → With Automatic Weights.

  8. Now it should all be good and working, but you have a bunch of other body parts with no real mesh on them. No problem. Just start selecting those in Edit mode — where they conveniently turn back into bone shapes for easy selecting — and get rid of them. I actually left the shoulder parented to the baseplate positioner. But I think there is some flexibility here and things will still pretty much work nicely.

Rigging Process

To do a simple rigging of a simple single object with a posable bone I’ll use the example of a rowing oar.

  • Starting in object mode, select the mesh of the oar.

  • Still in object mode, add and select the oar’s armature.

  • Line this new armature and its single bone up with the oar in the orthogonal position convenient for modeling. Put the base (fat end) at the oarlock where the oar pivots.

  • Go to Pose Mode with [C]-tab and select the relevant specific oar bone in the armature (as opposed to the whole armature).

  • [C]-p to bring up the parenting menu - select "Bone".

  • Now, in Pose Mode, you can manipulate the oar bone and the oar mesh will follow.

  • Doing this will allow you to go crazy with waving the oar around for animating, but to resume work modeling the oar you can just go to Pose context menu → Clear User Transforms. Or you can use [A]-r and perhaps [A]-g to reset any rotations and translations (respectively) you may have introduced.

  • [A]-g - Remove bone movements.

  • [A]-r - Remove bone rotations.

  • [A]-s - Remove bone size changes.

Pose Problems

I have had problems being able to reset my rig back to the default pose. Sometimes when I want to start posing a rigged model I get "Cannot change pose when Rest Position is enabled." On the object data properties for the armature there is a button that says "Pose Position" and another that says "Rest Position". Clicking the first can just scramble the whole thing. So how to start with the rest pose as the first pose? In Pose Mode hit [Ctl+a] to bring up the apply menu, which will look pretty different. Select "Apply Pose as Rest Pose". This has worked for me so that I could switch to "Pose Position" and have it not scramble.

Sometimes however this is not enough. Maybe the rest pose is fine and when you go to Pose Position button in the armature properties all hell breaks loose. How can you get the pose position to just chill out and be like the rest pose? The trick I’ve found is to first go to pose mode and use [a] to select all bones. Then use [alt+g], [alt+r], and [alt+s] (all three!), to basically do what the Pose → Clear Transform → All menu item does. With the pose transforms cleared, you’re left with the rest position when posed. Start again and enjoy!

Still having problems with this? Make sure you check all your bone constraints. I had some set on a bone that locked the rig into a weird pose position even when I tried to clear all the transforms to reset it to the rest position. At least turn off visibility on those constraints to check if that’s causing problems.

Weight Painting

Sometimes everything seems pretty good with the rigging process, but when you animate your model, some part isn’t quite right. Maybe the crease in a joint looks weirdly folded, or you modeled a hand too close to the body and moving the arms picks up the hips' mesh. To fix this, you need to weight paint the significance of each bone’s contribution to mesh deformation.

  • Click armature, then [shift]+LMB the mesh — this allows the Weight Paint mode to be selectable.

  • Under the Draw tools tab there is Options → Auto-normalize. This is important because the transition from what gets influenced to what does not has to add up to 1 somehow. Doing this manually does not even seem possible, but I may be overlooking some interesting cases. Suffice it to say that enabling this keeps everything correct.

  • Under the tools tab there is a "Symmetry" section that can be a big help when sorting out arms and legs and such.

  • The armature (stick figure icon) object data properties → Viewport Display → Axes can be useful if you want to test rotations of, say, a wrist, but rotating it about X produces a random rotation. [ctl+r] can be used to modify how these axes are arranged on bones (an example). This allows you to test by doing the normal rotation commands and the three axes will be proper side to side, up and down, or twist. In general X should be the main perpendicular axis of the primary rotation. So for a wrist, rotating around X should be raising and lowering the fingers (palm facing down); Z should be fingers side to side; Y the twist of the hand along the hand bone.

  • The smear tool is useful for pushing influence out of a wrong area. Same with the blur tool for even more of an effect.

  • Rigify is super easy and simultaneously super hard. Doing weight painting with a Rigify rig is very unintuitive. You need to turn on the deformation bones layer to properly see how the weights are assigned. In the armature object data properties → Skeleton → Layers there is a cryptic grid of dots. You must find the one that is labeled bpy.armatures["rig"].layers[29]. This will be the third from the right on the bottom row. Do not look for it under Protected Layers! Once those bones are on you can start seeing what bones go with what vertex group.
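
The Auto-normalize option mentioned above is just the constraint that each vertex’s bone weights sum to 1. A plain-Python sketch of that bookkeeping (illustrative, not the bpy API):

```python
def auto_normalize(weights):
    """Rescale one vertex's raw bone influences so they sum to 1, which
    is what Auto-normalize enforces while you paint. `weights` maps a
    bone name to a raw influence value for a single vertex."""
    total = sum(weights.values())
    if total == 0.0:
        return dict(weights)  # nothing influences this vertex; leave it
    return {bone: w / total for bone, w in weights.items()}
```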

Rigify Custom Elements

The CGDive videos (https://www.youtube.com/watch?v=Cq2Vw6EFXy0) do a good job of explaining how to take advantage of Rigify elements to rig your own weird custom rig. He highlights critical but mysterious concepts and several quirky UI choices where important steps are quite deeply hidden. I’ll try to summarize those here so I can refer to them more easily.

The first thing he notes is that there are "rig types" (aka "building blocks" or "components"). You can find this by generating a simple pre-built metarig and going to pose mode and selecting, say, upper_arm.L. Then under the bone properties tab (little bone icon), you should see a Rigify Type panel with a Rig type field. For a metarig upper arm bone, this should appear as limbs.arm. Note that forearm.L needs (and has) no value because it inherits from the connected parent; only the starting base bone of a "rig type" needs to be explicitly defined. To be in a rig "component" the child bones must be parented and connected. This happens naturally when you subdivide or extrude a bone, but disconnects can be the source of Rigify errors during rig generation if you’ve accidentally broken a chain while aligning the metarig. That’s the normal setup; however, there are exceptions like limbs.leg.

To see the components you can select a rig in edit mode (only!) and go to the armature properties tab and look in the Rigify section for the Samples pulldown. There you can be reminded what is available and use the Add sample button to insert one into your rig.

These components can be stuff like…

  • spines.basic_spine - Can have as many segments as necessary. Ignore basic_ which means nothing. Also spines.super_spine is deprecated.

  • spines.super_head - Has (perhaps multiple) neck bone(s) and a head bone at the end. Usually 3 bones, but could be more. Ignore super_.

  • spines.basic_tail - Ignore basic_. Needs to be at least 2 bones.

  • limbs.arm

  • limbs.leg - Must conform to the configuration of a chain of 4 bones with a disconnected heel.02.L bone parented to the foot.L bone (don’t know why there is an .02 in there but there is).

  • limbs.front_paw - I believe this has the same requirements as the leg.

  • limbs.rear_paw - Same as leg.

  • limbs.paw - Same as leg.

  • limbs.super_palm - Ignore super_.

  • limbs.super_finger - Ignore super_.

  • limbs.super_limb - Deprecated, ignore.

  • limbs.simple_tentacle - Used for animal paw in stock metarigs. Good for things like insect antennae.

  • limbs.spline_tentacle - Good for a more dynamic real tentacle as on an octopus.

  • faces.super_face - The full fancy face rig. There are other subcomponents related to faces like face.basic_tongue, face.skin_eye, face.skin_jaw, and perhaps more. Note that to use the faces.super_face you need to parent the face bone in the middle to the head bone of the spines.super_head. And yes, the category faces is different than face for the subcomponents. This is because the face.___ and skin.___ are the components to the newer modular system that is the object of the Upgrade Face option.

  • basic.copy_chain - Multiple bones tip to tail, I think. If you just have a simple linkage and you want it in the generated rig, this will set it up.

  • basic.super_copy - Simply copies a single bone (no assemblies, which should use basic.copy_chain) from the metarig to the final rig. This is used in the human metarig for the shoulder, breast, and pelvis.

  • basic.raw_copy - Advanced type that allows you to set up more complex rigs and have them transfer into the final rig at generation. Details hazy.

  • basic.pivot - Uses a single bone only. Related to using the raw_copy somehow. Details hazy.

  • experimental.super_chain - Obviously "experimental" but rumored to be potentially useful for cables, straps and other such things.

Adding A Prop To A Rig

A very common problem I have is that I need to add props like a sword, oars, ski poles, glasses, and so on. To put a tennis racket in a hand, for example, you can add a custom basic.super_copy to a Rigify metarig. Then when you generate the rig the racket will have its own bone.

Then put the bone for the held item in place in edit mode, shift select the hand bone, and use [ctl+p] to parent with Keep Offset. You can change the metarig bone name to Sword or whatever and that should get copied too. It’s also nice to change the bone shape to something better than the default circle; go to the Bone Properties tab in pose mode and in the Rigify Type section you can select a Widget Type. It is also advised to go to the Armature tab and look in the Advanced section to turn on the Overwrite Widget Meshes checkbox if you want to see your widget type shape actually update.

Once the final working rig is generated and you have parented the character mesh to it, you still need to parent the prop. The trick to doing this is that you have to select the tennis racket, then shift select the rig, then switch to pose mode, then select the sword bone of the rig and parent. Don’t forget that the parenting type is bone.

You can also skip making a special bone in the rig and just do a bone parent right to the hand bone. That may work for many things, but if you have some kind of thing that moves around in the hand then the extra freedom of the object’s own bone can be helpful.

Annotations

Annotations are very useful but a bit weird. They’re essentially a way to put stuff on your stuff. This makes one wonder why not simply use some more stuff? Why introduce a different kind of stuff for putting on stuff? The annotation system is a way to let you just free-form doodle pretty much anything anywhere in Blender at any time. This is fantastically useful for communicating with other people (and your future self!) subtle details about the Blender project itself.

The important forgettable thing to know is that holding down the [d] key makes LMB into a free-form doodle at the cursor.

Here are some more very useful keybindings.

  • [shift+d]+LMB - Stabilize drawing.

  • [alt+d]+LMB - Restrict to horizontal or vertical.

  • [shift+alt+d]+LMB - Line annotation mode. Click once holding all that down and it kind of enters line mode, i.e. you can let up on the [d] key and it’s still ready to draw lines. To get out of the mode, press [Esc]. Also in this mode you can hold [Alt] and it will constrain one of the axes in a quirky but actually sensible way.

  • [d]+RMB - Erase annotations.

But to get a little fancier with it you probably want to open the [t] menu and click on the little pencil with blue line icon. With this tool permanently activated you can then see its properties in the [n] menu. Those properties allow changing colors and organizing doodles into different annotation "objects" (they’re not really normal Blender objects because they don’t show up in the outliner). To get straight lines, polygons, or the eraser mode, hold the LMB over the annotate tool icon in the [t] menu. Also [Shift+space] has an Annotate option that you can hold down for all the modes.

The Placement mode is very useful. Surface Placement will allow you to scribble on the face of some geometry, flat or complicated. View Placement will stick the doodle to the 2d screen regardless of how you orient your model in 3d. 3d Cursor Placement just lines up your doodle to the current view and puts the depth at the 3d cursor; what "depth" exactly means is a bit complicated since the annotation is drawn after (on top of) the model by default; this can be changed by clicking on the In Front icon in the [n] menu after clicking the annotation pull down (next to color).

I really find the default dull civil war blue to be a terrible color for highlighting things. This is easy to fix in Preferences → Editing → Annotations → Default Color. Problem solved.

Note that annotations use some of the same machinery as grease pencil features. Annotations can actually have onion skin properties like grease pencil — they probably use the same system.

2d Drawing With Grease Pencil

The grease pencil is a way to do fancy 2d animation based on a 3d model or in a 3d environment. Or not — these tools are powerful even if you don’t need 3d support. The intent, I believe, is to get a 2d drawn animation from a 3d modeled environment. This includes characters and rigging. All of it can be animated.

It might be helpful to recall what a "grease pencil" is in real life — they are (were?) weird pencil-like writing instruments that could write on difficult smooth surfaces. People used them to mark up things like glass (they seem to be called "china markers" today) and, if my recollection is not amiss, transparencies for overhead projectors which were basically PowerPoint machines before there was PowerPoint. They had a weird string which was used to "sharpen" them by tearing off layers of paper which composed their housing. I think these were traditionally used to mark up clear plastic animation frames (though I can not imagine them being used to execute the artwork). So the gist of the concept is that they write on smooth clear plastic and if you do a stack of such things just right, you’d have an animation. Again, it’s a bit odd to me to think of the grease pencil executing the artwork itself (it’s not permanent, smudges off, poor detail, etc.) but I think the much finer Blender tool evolved from a less fancy feature. I think of grease pencils as being a better skeuomorphic fit with Blender’s annotation tools (and I guess they’re kind of related).

Usage

To get started, you must add a grease pencil object in a normal 3d editing situation. Once you put that in your scene, you can change the mode to "Draw" and start playing around.

Note that you will not see the proper line thickness or color if you are in wireframe view mode — try solid.

Here are some interesting key bindings.

  • [f] - Brush radius.

  • [SH+f] - Brush strength.

  • [u] - Change active normal material.

  • [y] - Change active normal layer.

  • [i] - Insert keyframe - which in grease pencil is like a blank canvas at that point in time. It will actually clear previous (in time) strokes, etc.

  • [h] - Hide active layer.

  • [SH+h] - Hide inactive layers.

  • [ALT+h] - Unhide all layers.

  • [w] - Context menu.

  • [ALT] - Constrain strokes to vertical or horizontal.

  • [SH] - Constrain strokes to vertical or horizontal on line tools. Or 1:1 aspect ellipses for circles and rectangles.

  • [SH+ALT] - Extends the line in the other direction too with line tools. Useful for box and circle.

  • [CTL] - Change drawing to eraser tool.

  • [CTL+ALT+LMB] - Lasso tool erase.

  • [TAB] - With pointer over the timeline, will lock the current active layer.

  • [CTL+i] - Invert selection of selected keyframe dots on the timeline/dope sheet.

  • [SH+r] - Repeat the last action in the timeline/dopesheet. Maybe useful for [SH+d] duplicating or [g] moving some awkward distance multiple times.

  • [1] - Only points in edit mode.

  • [2] - All stroke points in edit mode.

  • [3] - Select strokes in between other strokes in edit mode.

  • [l] - Select linked points, which will basically be the complete stroke. Good for sculpting just one feature, or use box or circle select to constrain what points can be sculpted.

  • [f] - Fills points with an intermediate segment.

  • [CTL+j] - Join points - not sure of the difference to [f], maybe separate objects?

  • [ALT+s] - Change stroke radius.

  • [u] - Activate curve editing mode. Basically puts bezier controls on curves. Also a GUI control found to the right of the 1-2-3 control.

You can mark edges of a model and then in Object mode, go to the Object GUI top button and look for "Convert To" and "Grease Pencil". Go to the lower left menu ([F9] if you missed it) and look for "Only seam edges". You can adjust some other things too.

Enabling the addon Add Curve Extra Objects will give you a lot of things to make quick grease pencil items from. Stuff like arrows and stars.

With several grease pencil objects it can be confusing which one you’re working on. Check the outliner for a dot or an icon to the left of the grease pencil object name. That is where you can change which object is being worked on.

This information was so useful, I’m going to reproduce it here.

Organization Of Grease Pencil Concepts

Collections can contain grease pencil objects (fat squiggly snake icon). The grease pencil objects can contain a grease pencil data block (similar icon with endpoint vertex squares). The grease pencil data block generally contains layers, the basic two are by default called "Lines" and "Fills". Layers contain frames which are selected by the playhead placement. Frames are composed of the strokes and fills themselves which are composed of points (perhaps vertices). There is also a concept of channels which is a bit more mysterious to me — the manual provides this hint, "With Auto keyframe activated, every time you create a stroke in Grease Pencil object Draw Mode a new keyframe is added at the current frame on the active channel." This manual page also has hints about how it all works.
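
To keep the nesting straight, here is a toy plain-Python model of that hierarchy (plain dicts for illustration, not the actual bpy types; layer and frame contents are made up):

```python
# object -> data block -> layers -> frames -> strokes -> points
gp_object = {
    "name": "Stroke",
    "data": {
        "layers": {
            "Lines": {"frames": {1: [[(0, 0), (1, 1)]], 10: [[(0, 0), (2, 0)]]}},
            "Fills": {"frames": {1: []}},
        }
    },
}

def strokes_at_playhead(gp, layer, frame):
    """The playhead shows a layer's most recent keyframe at or before
    `frame`; frames between keyframes just hold the previous art."""
    frames = gp["data"]["layers"][layer]["frames"]
    keys = [f for f in frames if f <= frame]
    return frames[max(keys)] if keys else []
```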

Here’s one workflow to animate a typical thing. The manual talks about redrawing new frames each time, but I find it helpful to just morph the previous position. This doesn’t have to be the previous frame. With the dopesheet grease pencil mode visible, make sure your "layer" is visible. The layer can be cross referenced over in the object data properties under Layers. If you use [i] to insert a new keyframe all of your previous artwork will be cleared; perhaps ok for starting at the first frame. Draw your lines in draw mode. Then select the keyframe on the layers channel (you must have the mouse focus on the dope sheet) and [Sh+d,x,5,Enter] to create a duplicate keyframe of this art 5 frames over. Then scrub over 5 frames (or whatever you need) to the next important position. Edit or sculpt your art into the new position. This will be mutating the geometry on this new copy of the keyframe. An advantage of this method is that you preserve perfect correspondence of all the parts. E.g. the leg in the last keyframe is the leg, drawn the same direction, in this one. This means that later, you can go back and interpolate and expect things to go pretty well.
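
That point-correspondence payoff is easy to see in plain Python: with matching points, an in-between frame is just a per-point blend of the two keyframed strokes (a sketch of the idea, not Blender’s actual interpolation operator):

```python
def interpolate_stroke(stroke_a, stroke_b, t):
    """Blend two strokes with matching point counts; t in 0..1 moves
    each point from its position in stroke_a toward stroke_b."""
    return [(p[0] + (q[0] - p[0]) * t, p[1] + (q[1] - p[1]) * t)
            for p, q in zip(stroke_a, stroke_b)]
```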

The grease pencil data block tab — near the Materials thingy when a GP Object is active — allows you to add and remove layers if the ordinary "Lines" and "Fills" are not enough. I find that adding a GP Object with "Add → Grease Pencil → Stroke" starts you off with a quintessential stroke that is inevitably not what you really need; it seems quite ok to use "Blank".

Troubleshooting Grease Pencil Problems

If you’re not seeing the grease pencil strokes or they are faded out while you’re trying to create them there are many possible explanations.

  • Your reference image could be set to "front" rather than "back", obscuring your work, perhaps with some background opacity.

  • In the grease pencil’s object data properties, there is a section called "Layers" with its own opacity setting that needs to be sufficiently visible.

  • Make sure you’re not on a neighboring frame with the onionskin effect on.

  • The "strength" of the stroke is a setting like brush size. It is found at the top of the screen near the brush radius in the interface while editing. It is set to .6 by default and this can be too weak. A pressure sensitive tablet pen can change this fluidly but RMB while editing lets you set it explicitly.

Note that to see what’s going on along the animation timeline, you need to have a dope sheet pane visible and you need to set its submode to "Grease Pencil"; only then will you see the keyframes that govern what grease pencil art is visible at any given time.

An important tip is to isolate the subject’s animation from its motion. These sound like the same thing, or at least similar and related, but it’s best to decouple them. For example, if you have a character walking, it’s tempting to assume that in frame 2 the character will have moved forward because of the walking. Resist that temptation and animate as if the character were in a space suit, miming the action during a space walk against a black featureless background. Once the character’s internal motion relative to itself is animated, only then should you provide the actual motion in context. This is relatively easy because you can just keyframe the object in object mode and it will all look sensible.

Animation

Animation Controls Overview

The animation system in Blender has an easy mode that is deceptively simple. You can set auto keyframes, move a thing around, hit play, and it’s good. But behind the scenes all kinds of crazy stuff is going on, and any animation that gets even slightly complex will become very confusing very quickly.

For example, in the timeline, new keyframes often do not appear when you might expect them. The crazy thing is that you need to use MMB to pan the display down because the keyframes are there, just panned up above the title bar. Crazy! That is a broken system! But be aware.

This video is an excellent introduction to the true nature of Blender’s animation system beyond dumbed down clutching at keyframes.

There are many different editor panels which are used in controlling animation.

  • Timeline - Preview the animation, i.e. actually watch it and see it happen visually. This means scrubbing controls, etc. It also is where automatic keyframes are activated and controlled. Again, note that keyframes should show up here but often don’t because they are scrolled up and off the display. It is also useful for setting the rendered animation frame limits, which if you think about it is all about previewing — in this case, the render.

  • Dope Sheet - Detailed display of the keyframe data itself. Note that very often you’ll get something like "Object Transformation" and that looks like it. But it is normal that this one line has a little triangle on the far left — a triangle that points right. If you click it, you open up this line and the triangle points down. Then you see a lot more explicit data about all the things. Basically don’t be shy about digging around the items displayed for deeper levels.

  • Action Editor - This is actually a subcategory of the Dope Sheet editor (see the "Mode" pull-down there). It tends to show less detailed information than the normal Dope Sheet, which shows a full action hierarchy; the Action Editor shows just this action. It does have more helpful controls for singling out a particular action, however. (Note the "Browse Action to be linked" pull-down, confusingly shown with the same icon as the Dope Sheet editor itself.)

  • Graph Editor - What the dope sheet is for keyframe numbers, the graph editor is for the transition data. If the dope sheet says property-A is 10 at time 0s and 20 at time 3s, the graph editor plots the transition. This can be a custom curve or any kind of mathematical thing.

  • Nonlinear Animation - This editor is for playing with actions in a way that’s similar to playing with video clips in the video editor. You can get an action into this editor with the "Push Down" button in the title bar of the Action Editor. Once in the NLA editor, you can move it around in time and stack it with other actions. Different actions can even be combined in interesting ways. For example, if you have the following two actions: a swimmer crossing a river and a swimmer being carried down river by the current, you might combine them to get the swimmer on the correct trajectory. To do this highlight the higher strip (which by default replaces lower ones) and change the "Blend" mode in the right panel which, like the main 3d editor, is made visible with "n". Another CGDive video does an incredibly good job explaining this whole mess in perfect detail. Bravo!

Try this video too.

Actions

In Blender there is an important but somewhat advanced concept (not necessary to know about for simple cases) called "actions". An action is where keyframes are really stored. Just like objects contain mesh data, actions can be thought of as objects that contain keyframe data. Actions also contain curves which specify how the keyframes make the transition. This makes actions a kind of container holding all the data needed to animate something.

Actions can be linked to objects, and the properties that the action changes over time with its keyframes and transition curves will animate that object. One action can be linked to multiple objects; this is the reference count number that shows up in places. If the reference count drops to zero because you’ve unlinked all objects from an action, it will disappear when the animation system recomputes. To prevent this and save an arbitrary action disassociated from any particular object, you can give it a "fake" user. This is like "phony" in Make and just keeps the action around and valid even though it doesn’t really have a real object target (yet).

To create this fake user you can "Save" an action by clicking the shield icon in the Dope Sheet editor’s Action Editor mode. Similarly, the X there is to delete or, technically, unlink actions. This is also where you can create a new action to work with.

Timeline

  • [A]-a - Play animation or if in progress, stop animation.

  • [A]-[S]-A - Play animation in reverse.

Sometimes there is an unlabeled orange bar near the bottom - just below the marker labels if there are any. This is showing "the cache". This seems related to Scene Properties → Rigid Body World → Cache. This line indicates the frames at which the simulation system starts and stops, which is defined globally only once in the Scene Properties section just mentioned. The simulation start and end frame settings seem quite important; if the system isn’t even trying to simulate physics in the frames you expect, then things won’t work.

If your system came from elsewhere but is not behaving how you expect when you modify it, look specifically there at the "Bake" and "Free Bake" (un-bake) setting.

There can be confusion in Physics Properties → Rigid Body → Settings with the "Dynamic" and "Animated" checkboxes. The tricky thing is that you may need to "animate" the "Animated" setting — this means that you’d want the rigid body physics to be active during some part of the shot but not the whole thing. The confusing part is that "Animated" can kind of mean "not animated" because it means you’re animating "by hand" and not letting the physics simulation take over. It’s confusing enough to put keyframes on these checkboxes but keep in mind that "the Animation System" as shown in the tool tip for "Animated" means the normal one where you do the animating with keyframes, not the physics simulation.

Keyframe

  • i - insert keyframes. This is important to get the yellow line key frames to show up. Use i again for the next place. Also, to keyframe the value of almost any miscellaneous Blender property, you can hover over its slider bar and press i. Note that when editing the object’s position, this is not automatically updated on the keyframe and will revert (unless auto record is on); just make sure to press i after every pose edit.

  • [A]-i - Delete keyframe. Put current timeline frame on the desired keyframe and put the cursor over the main 3d window.

When the value fields in the properties menu show up yellow that means they are showing the value of a selected keyframe. When they show up green, that is one of the frames which is being interpolated. (So it seems to me.)

The red record button on the tool bar is to do automatic keyframes. With this engaged, LocRotScale changes are automatically assigned to the current time location as a keyframe.

Dope Sheet

To move all of the animations such that the effect is inserting more frames in the animation, you can go to the dope sheet and press a to select everything. Then press "g20" to move everything 20 frames forward.

Rigid Body Animation

The rigid body animation can position an object based on physics, stuff like falling and bouncing off things. The biggest point of confusion I had was on the "Animated" property. This property itself can be keyframed (animating the animating) and doing this is how you activate the computed physics action. What I found confusing was when you want the object to do animated stuff on its own, you must turn off "Animated". Then the RB physics can take over. N.b. that "Animated" means: you, the Blender user, will be animating this by hand - not the rigid body physics system.

Here’s the tool tip on the Animated property: "Allow rigid body to be controlled by the animation system." Since this is found on the Physics Properties → Rigid Body → Settings panel, you think it means the rigid body animation system. No - the opposite. It means the normal manually specified key frame animation system. So that’s quite confusing IMO.

Reference Images

When doing hand-drawn 2d animation it is very helpful (and not at all cheating!) to have some live action video of such action to use as a reference. In the outliner, select the camera and then in the properties panel (where materials and modifiers live) look for the camera settings. There you will find a section for "Background Images" which works as expected. You can even bring in video clips.

I found it helpful to use youtube-dl to bring in the video and then use something like this to remove the audio and crop it down.

ffmpeg -i YouTubeJumper.mkv -ss 00:01:34 -t 00:00:03 -vf "crop=950:550:400:350" -an jumpaction.mp4

The "-ss" is the start time and the "-t" is the duration of the clip, allowing you to isolate just a small set of frames. And the crop parameter allows you to just take a portion of each frame. The crop format is "crop=width:height:upperleftX:upperleftY".

If you want stills you can do that too by just having the output file look something like this: jumpaction%03d.jpg.
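The crop argument order (width:height:x:y) is easy to forget, so here is a tiny helper of my own (plain Python, not part of ffmpeg) that builds the filter string, along with a reminder that the %03d in the stills filename is ordinary printf-style zero-padded numbering:

```python
def crop_filter(width, height, upper_left_x, upper_left_y):
    """Build an ffmpeg crop filter string: crop=width:height:upperleftX:upperleftY."""
    return f"crop={width}:{height}:{upper_left_x}:{upper_left_y}"

print(crop_filter(950, 550, 400, 350))  # -> crop=950:550:400:350

# jumpaction%03d.jpg numbers frames with zero padding, e.g. frame 7:
print("jumpaction%03d.jpg" % 7)         # -> jumpaction007.jpg
```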

Physics

Cloth

  • Add a plane which will be the cloth.

  • In vertex edit mode, RMB to get the vertex context menu and select subdivide. F9 to change the divisions to something more like 64. Though if you’re previewing you might start smaller like 32 or even 16.

  • With the cloth plane selected in object mode go to the Physics properties and click the Cloth button.

  • Object collisions needs to be checked.

  • Self collisions might need to be checked if the cloth drapes over itself such as the corner of a table cloth.

  • In object mode, select any objects that will interact with the cloth and go to their Physics properties and click Collision. I’m not sure if you can do multiples at one time with this operation; doesn’t seem like it so don’t count on it. Also note that sharp edges can just poke through cloth, so if it’s easy to bevel sharp corners (as on the default cube) it’s good to do so.

  • You can Bake the cloth physics but this can take an impressively long time. So maybe start with a short sequence, for example have the start and end frame be 1-50 or so.

  • Make sure you set the frame range sensibly! You must do this in the Cache section of the Physics → Cloth. It’s not good enough to just do it in the timeline. If you mess this up and your 30 frame animation is set to bake for 250 frames, you can stop it with Escape (not exactly sure where the cursor needs to be).

  • The cloth plane can be given thickness with a solidify modifier.

Pinning

  • In edit mode select some vertices to pin.

  • Go to the Object Data Properties.

  • Click the + under Vertex Groups. A new group called Group is created. It might be a good idea to rename it VG_ClothPin or something.

  • With the new VG_ClothPin group highlighted and some vertices selected, click Assign. Now you have recorded these vertices in a vertex group.

  • Back in the Physics → Cloth properties, look for Shape which will contain a field for Pin Group. Select the VG_ClothPin group.

  • For hooks to other objects, go to Vertex → Hooks → Hook To New Object.

  • This will create an Empty. I think, not 100% sure. This empty seems like it can control the VG_ClothPin group.

  • Go to the cloth object and look at the modifiers and you might see a Hook modifier. Apparently it’s very important that this modifier be at the top of the stack, above the Cloth modifier which is often above the Subdivision modifier.

  • If you want a physical flagpole kind of object instead of just the invisible empty, you can create such an object and then select it and then select the empty. Then [C]-p to parent it using Object (Keep Transform).

Python

The documentation is mysterious and this seems an arcane topic, but this link to official documentation is extremely helpful. Or https://docs.blender.org/api/latest is easy to remember (you can also substitute latest with something like 3.2 to get historically specific).

This video tutorial is extraordinarily clear and patient covering many useful Python API concepts.

A good test tuple to start with is bpy.app.version. Another good one — which will remind you that Blender carries its own completely separate Python installation — is sys.exec_prefix. Here’s what I’m showing.

/usr/local/src/blender-2.92.0-linux64/2.92/python
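A quick way to check which interpreter you are in (a sketch of my own; works in any Python): inside Blender the bpy import succeeds and sys.exec_prefix points into the Blender tree, while in system Python the import fails.

```python
import sys

def interpreter_info():
    """Report whether we're inside Blender and where this Python lives."""
    try:
        import bpy  # only importable from Blender's bundled interpreter
        host = f"Blender {bpy.app.version}"
    except ImportError:
        host = "plain Python"
    return host, sys.exec_prefix

host, prefix = interpreter_info()
print(host, prefix)
```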

After looking at the quickstart guide you can start to appreciate the Python attributes and operators shown in UI hover tool tips. If these are not showing up, go to Edit → Preferences → Interface → Display → Tooltips → Python Tooltips.

Perhaps the best way to get a sense of what other more knowledgeable people are doing with Python in Blender is to take a look at the source code for the addons. Here’s the typical place where mine live.

/usr/local/src/blender-3.2.2-linux-x64/3.2/scripts/addons

Enable Full Console Debugging

When doing stuff with Python, it’s good to have an "Info" pane open. By default this will show important messages that have registered to be shown. But if you want to see all possible messages you need to activate that explicitly with this.

>>> bpy.app.debug_wm = True

When developing, this is extremely useful.

Installing Dependency Modules

Of course fancy addons will want fancy modules. And since this is not system Python, it will appear that they are missing even when your system can use those modules just fine. You must first install pip in your Blender python.

cd /usr/local/src/blender-2.92.0-linux64/2.92/python
sudo ./bin/python3.7m -m ensurepip                   # Installs pip
sudo ./bin/python3.7m -m pip install --upgrade pip   # Upgrades pip
sudo ./bin/python3.7m -m pip install pyproj pillow numpy # Deps

Now it looks like GDAL is the real PITA. Sure Debian can get one no problem with apt install gdal. But things are tougher in the world of Blender. You have to go to this repackager’s repo and pick the right one. I guessed until I found one that worked. No idea what the compatibility parameters are.

W=https://github.com/AsgerPetersen/gdalwheels/releases/download/2.3.0_1/GDAL-2.3.0-cp37-cp37m-manylinux1_x86_64.whl
sudo ./bin/python3.7m -m pip install $W
sudo chmod -R a+X /usr/local/src/blender-2.92.0-linux64/2.92 # I had restrictive perms.

This also downgraded Numpy. Why not? Go for it! After that, I could go to the Python console in Blender and "import gdal" came back with the goods.
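After installing dependencies, here is a small check of my own (run from Blender’s Python console, or any Python) to confirm a module actually landed in that interpreter without importing it; shown with stdlib modules, but the same works for gdal or pyproj.

```python
from importlib import util

def have_module(name):
    """True if `name` can be imported by THIS interpreter (no side effects)."""
    return util.find_spec(name) is not None

# e.g. have_module("gdal"), have_module("pyproj") after the installs above
print(have_module("json"))              # -> True
print(have_module("no_such_module_x"))  # -> False
```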

Input

The one Python function that did not work which was a bit of an impediment for my purposes was the input("Prompt:") command (formerly known in Python2 as raw_input). I understand the limitation but what if you need to send user input to your Python activities?

This helpful tutorial has some answers. It says…

Because the user interface itself is written in Python and is designed to be extended and because virtually all internal data structures are accessible through a well documented interface it is pretty straight forward to implement add-ons that are written in Python.

Coordinates

From the Python console this will dump vector (object) coordinates for selected vertices.

[i.co for i in bpy.context.active_object.data.vertices if i.select]

Use i.index to get the vertex number instead of the coordinates.

Main Python Object Layout

  • bpy.data - Project’s complete data.

  • bpy.context - Current view’s data.

  • bpy.ops - Tools usually for bpy.context.

  • bpy.types -

  • bpy.utils -

from_pydata()

The from_pydata() function is a way to take Python data and make Blender data. Unfortunately, the documentation is perhaps hard to find and probably non-existent. However this is helpful. And this. And this.

Basically you need something like this.

import bpy
verts = [(1.0, -1.0, -1.0), (1.0, 1.0, 1.0),
        (-1.0, 1.0, -1.0), (-1.0, -1.0, 1.0)]
faces = [(0, 1, 2), (0, 2, 3), (0, 1, 3), (1, 2, 3)]
mesh_data = bpy.data.meshes.new("tet_mesh_data")
mesh_data.from_pydata(verts, [], faces)
mesh_data.update() # (calc_edges=True) not needed here
tet_object = bpy.data.objects.new("Tet_Object", mesh_data)
# In 2.8+ objects are linked into a collection, not the scene directly.
bpy.context.collection.objects.link(tet_object)
tet_object.select_set(True) # 2.8+; in 2.7x this was tet_object.select = True

In the from_pydata() function call, you need a list of vertices which are 3-member tuples of floats. The second field is edges and the third is faces — typically you supply only one of the two and leave the other an empty list. The edge or face list is a list of tuples of vertex indices. So an edge connecting the first vertex to the second would be [(0,1)]. There can also be quad faces with 4 values per tuple.

mesh_data.from_pydata(VERTS,EDGES,FACES)
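Bad indices handed to from_pydata() can crash or corrupt the mesh, so a pure-Python sanity check before calling it can save grief. This is a sketch of my own, assuming tris/quads only as described above, and needs no bpy:

```python
def validate_pydata(verts, edges, faces):
    """Check that from_pydata()-style input is self-consistent."""
    n = len(verts)
    assert all(len(v) == 3 for v in verts), "verts must be 3-tuples of floats"
    for name, items, arities in (("edge", edges, (2,)), ("face", faces, (3, 4))):
        for item in items:
            assert len(item) in arities, f"bad {name} arity: {item}"
            assert all(0 <= i < n for i in item), f"{name} index out of range: {item}"
    return True

verts = [(1.0, -1.0, -1.0), (1.0, 1.0, 1.0), (-1.0, 1.0, -1.0), (-1.0, -1.0, 1.0)]
faces = [(0, 1, 2), (0, 2, 3), (0, 1, 3), (1, 2, 3)]
print(validate_pydata(verts, [], faces))  # -> True
```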

Class Naming Conventions

It’s common to see some structured names in class definitions, for example in the examples found in /usr/local/src/blender-3.2.2-linux-x64/3.2/scripts/templates_py. These have the format of ADDONNAME_XX_class_name. The first part is whatever your addon is to kind of isolate the namespace you’ll be using. (Not sure what to think if you’re not exactly making an "addon" per se.) Next comes the type code and they are as follows.

  • HT - Header type

  • MT - Menu type

  • OT - Operator type

  • PT - Panel Type

  • UL - UI List (uh, type?)

Then comes the name of your class.

More details about this are described here though I could not find it in official docs.
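The convention can be captured in a regex. This pattern is my own sketch of the rule as described above, not Blender’s actual validator:

```python
import re

# ADDONNAME_XX_class_name, where XX is one of the type codes listed above.
NAME_RE = re.compile(r"^[A-Z][A-Z0-9_]*_(HT|MT|OT|PT|UL)_[A-Za-z0-9_]+$")

def conforms(class_name):
    """True if class_name follows the ADDONNAME_XX_class_name convention."""
    return bool(NAME_RE.match(class_name))

print(conforms("OBJECT_PT_rename"))   # -> True
print(conforms("RenameObjectPanel"))  # -> False
```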

Useful Functions

  • bpy.ops.mesh.primitive_plane_add()

  • bpy.ops.mesh.primitive_grid_add()

  • bpy.ops.mesh.primitive_cube_add()

  • bpy.ops.mesh.primitive_circle_add()

  • bpy.ops.mesh.primitive_cone_add()

  • bpy.ops.mesh.primitive_torus_add()

  • bpy.ops.mesh.primitive_ico_sphere_add()

  • bpy.ops.mesh.primitive_uv_sphere_add()

  • bpy.ops.mesh.duplicate()

  • bpy.ops.mesh.duplicate_move()

  • bpy.ops.mesh.delete(type=T) - VERT, EDGE, FACE, & more

  • bpy.ops.mesh.merge() - Not just for removing redundant verts; e.g. merging two corners of a square face makes a triangle.

  • bpy.ops.mesh.quads_convert_to_tris() - A tri isn’t subdivided, however.

  • bpy.ops.mesh.tris_convert_to_quads()

  • bpy.ops.mesh.edge_face_add()

  • bpy.ops.mesh.select_all() - In edit mode only, and only on the selectable elements.

  • bpy.ops.mesh.spin() - Surface of revolution.

  • bpy.ops.mesh.subdivide(number_cuts=N) - Edit mode. In a grid.

  • bpy.ops.mesh.unsubdivide() - Edit mode. In a grid.

  • bpy.ops.mesh.wireframe() - Interesting pseudo wireframes where edges turn into thin 4-sided sticks (as in chemistry).

  • bpy.ops.object.join()

  • bpy.ops.object.delete()

  • bpy.context.copy()

Creating Addons

The bl_info dictionary is defined outside of any classes and contains helpful information about the addon.

bl_info = {
    "name": "My Custom Addon",
    "description": "This is a custom addon that does something",
    "author": "Chris X Edwards",
    "version": (3, 1, 4),
    "blender": (3, 3, 0),
    "location": "View3D > Tools",
    "warning": "",
    "wiki_url": "",
    "tracker_url": "",
    "category": "Object"
}

The category field can be 3D View, Object, Modifiers, Mesh, Import-Export, maybe others.
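Here is a sketch of my own for sanity-checking a bl_info dict before shipping; the required-key set here is my assumption, not an official spec:

```python
def check_bl_info(bl_info, running_blender=(3, 3, 0)):
    """Minimal bl_info sanity check (assumed key set, not official)."""
    for key in ("name", "blender", "category"):
        if key not in bl_info:
            raise ValueError(f"bl_info missing required key: {key}")
    # "blender" declares the minimum Blender version the addon supports.
    return tuple(running_blender) >= tuple(bl_info["blender"])

info = {"name": "My Custom Addon", "blender": (3, 3, 0), "category": "Object"}
print(check_bl_info(info, (3, 3, 0)))   # -> True
print(check_bl_info(info, (2, 93, 0)))  # -> False
```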

Here is some code that creates a simple renaming addon. It creates a text box in the Object Properties menu to rename the currently selected object.

import bpy

class RenameObjectPanel(bpy.types.Panel):
    """Creates a UI element that allows the user to rename the selected object."""
    bl_label = "Rename Object"
    bl_idname = "OBJECT_PT_rename"
    bl_space_type = 'PROPERTIES'
    bl_region_type = 'WINDOW'
    bl_context = "object"

    def draw(self, context):
        layout = self.layout
        obj = context.object
        row = layout.row()
        row.label(text="Rename Object:")
        row = layout.row()
        row.prop(obj, "name")

def register():
    bpy.utils.register_class(RenameObjectPanel)

def unregister():
    bpy.utils.unregister_class(RenameObjectPanel)

if __name__ == "__main__":
    register()

Here is a bigger example: an operator that exports points from a modifier-evaluated mesh to an ASCII PLY file and adds itself to the File → Export menu.

#!/usr/bin/python3
bl_info = {
    "name": "ASCII PLY Points Exporter",
    "description": "Writes modifier points to a PLY file.",
    "author": "Chris X Edwards",
    "version": (0, 1, 0),
    "blender": (3, 2, 2),
    "location": "FILE EXPORT",
    "warning": "",
    "wiki_url": "",
    "tracker_url": "",
    "category": "Import-Export"
}
import bpy

class ExportPointsPLY(bpy.types.Operator):
    """Exports modifier points as a simple ASCII PLY file."""
    bl_idname = "export.points_ply"
    bl_label = "Points PLY"
    #bl_space_type = 'PROPERTIES'
    #bl_region_type = 'WINDOW'
    #bl_context = "object"

    def ply_output_header(self,vert_qty):
        """Prepares a simple ascii PLY header. Number of verts must be known."""
        return f"""ply
format ascii 1.0
comment author: Chris X Edwards
comment object: some points from a Blender modifier
element vertex {vert_qty}
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header
"""

    def write_points_to_file(self,context):
        output_path = "/tmp/points.ply"
        output_array = []
        depsgraph = bpy.context.evaluated_depsgraph_get()
        camobj = bpy.data.objects["The_Object_With_Modifiers"]
        obj_with_verts = camobj.evaluated_get(depsgraph)
        modifier_hitpoints = [v for v in obj_with_verts.data.vertices if v.co[2] != 0]
        for cv in modifier_hitpoints:
            # Transform to world coordinates, i.e. all camera shots will line up.
            wv = camobj.matrix_world @ cv.co
            output_array.append(f"{wv[0]} {wv[1]} {wv[2]} 255 0 0\n") # Red.
        with open(output_path, "w") as f:
            f.write(self.ply_output_header(len(output_array)))
            f.writelines(output_array)

    def execute(self, context):
        self.write_points_to_file(context)
        return {'FINISHED'}

def menu_func(self, context):
    self.layout.operator(ExportPointsPLY.bl_idname)

def register():
    bpy.utils.register_class(ExportPointsPLY)
    bpy.types.TOPBAR_MT_file_export.append(menu_func)

def unregister():
    bpy.utils.unregister_class(ExportPointsPLY)

if __name__ == "__main__":
    register()
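The exporter above writes a vertex count into the PLY header; here is a pure-Python check of my own (no bpy needed) that the declared count matches the data lines actually written:

```python
def check_ascii_ply(text):
    """True if an ascii PLY's declared vertex count matches its data lines."""
    lines = text.splitlines()
    if not lines or lines[0] != "ply":
        return False
    declared, body = None, []
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            declared = int(line.split()[-1])
        if line == "end_header":
            body = [l for l in lines[i + 1:] if l.strip()]
            break
    return declared == len(body)

sample = "ply\nformat ascii 1.0\nelement vertex 2\nend_header\n0 0 0 255 0 0\n1 1 1 255 0 0\n"
print(check_ascii_ply(sample))  # -> True
```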

Video Editing

Although Blender is primarily a rendering tool, since the purpose of its rendering was envisioned to be for high quality 3d animations, it also is good at editing those animations. I have tried other video editors and found them to be very unstable with more than a few megabytes of material. Blender, on the other hand, has never failed me no matter what absurd thing I tried. The only limitation with Blender is understanding the millions of tools, options and settings. When a tool has so much functionality, it becomes difficult to simply read the manual which could take years and still hide the part you need among the stuff you’ll never care about.

Getting Audio To Work

It can be very frustrating to have everything ready to assemble, but no sound is playing. Here are some ideas about that. But what I had experienced was far more ridiculous.

I had to go to Edit → Preferences → System → Sound and change Audio Device from PulseAudio to None. Ah, but then comes the tricky bit. Then you have to go to Edit → Preferences → System → Sound and change Audio Device from None to PulseAudio. It’s obvious really, right?

Key Bindings

  • Hovering in the timeline area (not the sequencer), "s" and "e" will set the start and end frame to whatever the current position is.

  • "Home" in the preview window will make the image fit as well as it can.

Sequencer

  • Middle mouse button - Pan Sequencer area.

  • [C]-middle mouse button - Rescale Sequencer area.

  • Right mouse button - selects strips. Not left!

  • [S]-right mouse button - selects (or deselects) multiple strips.

  • Using the right mouse button to select a strip and then holding it down and moving a bit puts you in a move mode. You can let go of the right button and position your strips. When in the correct place, the left button will exit the move mode and leave the strips in the new place. Note that you can drop strips so that they overlap a bit and their box will turn red. When you place them, they will get auto positioned so that they are perfectly end to start.

  • Hovering over the Sequencer, "page up" will position at the end of the next clip. And "page down" will position the current frame at the beginning of the previous clip.

  • "b" - in the sequencer, start a selection "box" that can select multiple strips. Drag with the left button to define the box.

Preparing New Blender For Video Editing

The first time you run Blender, there are probably some things you will want to adjust.

  • Click on the main graphics window somewhere to make the initial splash dialog go away. Now you’re looking at the "default layout".

  • Click the "layout drop down" button. Its tool tip is "Choose Screen layout" and it’s just to the right of the "Help" section of the main ("Info" - change with most top left icon) pull down menus. Choose "Video Editing".

  • This brings up the default Video Editing layout which contains these sections.

    • Video Preview Window - where the videos are shown.

    • Curve Graph Editor is to the left of the video preview window. Used to control complicated things like the speed of transitions, etc.

    • Video Sequencer - under the previous two areas is where video scheduling happens in a Gantt chart style.

    • Timeline - Useful for key framing.

  • The menus can be a little weird in Blender. For example, in the Graph Editor, the menu that controls it is below the graph display. Click the button to the left of "View" whose icon is a blue and white plot next to up and down arrows.

  • This brings up the major components menu. Change the Graph Editor into a Properties window by selecting "Properties".

  • In the Properties window, look for the "Dimensions" section and if it is open it should have a "Render Presets" menu. Use that to choose what kind of video you’d like to have. I chose "HDTV720p" for unimportant YouTube work, but "HDTV1080p" might also be good. Note that just below this menu, you should now see the resolution X and Y values that correspond to the preset you just chose.

  • Normal YouTube frame rate is 30fps. To the right of the X and Y dimensions is "Start Frame" and "End Frame". If you start at frame #1 and have 60 seconds of video at 30fps, what frame will you stop at? It’s the product of the two, 1800. If you know this ahead of time, adjust it now. If not, keep this in mind when it’s time to render.

  • Below the Start and End frame settings is the "Frame Rate" menu. You can change this to 30 or something else. One of the presets is "custom" so it doesn’t have to be a "preset" at all. Note that it is extremely wise to set this to be the same as your source video material.

  • Scroll down the Properties Window to the "Output" section. The default output directory is /tmp which is fine for many purposes, but if you’d like your Blender related files stored in a more sensible place, change this.

  • A bit below the output section is a menu where you can choose the output format. The default is set to "PNG" still images which is interesting to remember, but will require you to assemble a video file yourself. This is ok for short clips, but tedious for longer ones. Mikey suggests "Xvid". Unfortunately Xvid caused a lot of problems with seg fault crashing on rendering. Another possibly good choice would be "H.264" or whatever you think you’ll need. If a video you produce doesn’t work on the target you envision, return here to try different possibilities.

  • Next to the output type are two buttons "BW" and "RGB" which are both unselected. Unless you’re making an artsy black and white video, activate "RGB".

  • Go down to the "Encoding" area and open it if necessary. Go to "Presets" and choose "Xvid" here too (or whatever you’re using). This will then show up in the "Format:" pull menu nearby as selected.

  • Leave bit rate set to "6000".

  • Find the "Audio Codec" section. The default seems to be either "None" or "MP2". Mikey suggests "MP3" for videos with audio. Of course set "None" for silent videos. If you use MP3, change the bit rate to "192".

  • Make sure you choose a sensible container. Probably mp4 or avi and not matroska (unless you’re unconcerned if impoverished OS users never see it).

  • Back up at the top of the properties section, find the "Render" area and its "Display:" preset menu. Choose "Keep UI". Helps CPU usage during rendering. Just renders to a file.

  • Below the timeline area, look for the "Playback" control. That brings up a checkbox menu. Check the following.

    • Audio Scrubbing -

    • AV-sync - Make sure A and V are not misaligned.

    • Frame Dropping - drops frames to ensure smooth editor playback.

  • Go to "Info" section’s "File" menu and choose "User Preferences". Then select the "System" tab on the far right. Scroll down and look in the middle for "Memory Cache Limit". For 16GB systems a decent value is "10240" (add a zero to the default). Click "Save User Settings".

After you make all these initial changes, it is wise to not repeat the process every time you use Blender. Go to the main "Info" section’s "File" menu and choose, "Save Startup File". After doing that, you’ll be loading up Blender with your presets ready to go.
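The end-frame arithmetic mentioned in the setup steps above (seconds × fps, counting from the start frame) is a constant source of off-by-a-lot errors; a trivial helper of my own:

```python
def end_frame(start, seconds, fps):
    """Last frame number of a clip `seconds` long at `fps`, starting at `start`."""
    return start - 1 + int(seconds * fps)

# 60 seconds at 30fps starting at frame 1 ends at frame 1800.
print(end_frame(1, 60, 30))  # -> 1800
```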

Importing Videos

  • Imports are placed at the current frame (green line in sequencer). So get that in the right place.

  • Use "Add" menu below sequencer. Select "Movie". Choose from the file browser.

  • Two strips from the file show up, an audio and a video.

    • Green - Audio

    • Blue - Video

Import Still Images As Video Sequence

This succinct video perfectly describes the process.

Import a series of images to include as a video.

  • Open a panel with "Video Editing" to get a video timeline.

  • Use "Add" bottom menu item and select "Image".

  • Hitting "a" selects all images. Select what is needed.

  • Press the button to right of path, "Add Image Strip".

Adjust frame rate.

  • Open a panel with "Properties".

  • Changing the "Frame Rate" setting there just changes playback speed.

  • To change the timing of this strip only, select it.

  • Hit Shift-A to "add" something to this.

  • Select "Effect Strip".

  • Sub-select "Speed Control".

  • Look at the speed control effect strip’s properties on the right.

  • Check the box for "Stretch to input strip length".

  • To double the speed, you can change the "Multiply Speed" to 2.
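As a sanity check on what the multiplier does to timing, assuming the strip plays at the scene frame rate, the effective strip length is just the source frame count divided by the multiplier.

```python
import math

# Effective playback length of a strip under a Speed Control multiplier.
# E.g. 300 source frames at 2x play back in 150 frames.
def effective_frames(source_frames, multiply_speed):
    return math.ceil(source_frames / multiply_speed)

print(effective_frames(300, 2))   # 150
```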

Cutting

Often you just want to do a simple thing like cut off a bunch of stuff at the beginning and end that you don’t care about. The basic process is as follows.

  • Load the movie.

  • Select the one you want. Probably LMB these days (RMB in olden times).

  • Position the current frame where you want the cut. LMB on the timeline frame numbers works, as do the arrow keys.

  • Shift-K to make a "hard" cut. This makes two concatenated clips.

  • Select the end (if trying to cut off the end) or leave beginning clip selected.

  • Press DEL key and then confirm by left clicking the "Erase clip" message.

Adding An Image Or Static Overlay

Reasons for doing this might include the following.

  • Putting annotations on a video like YouTube used to allow.

  • Blocking out a certain part of the video.

  • Watermarking or branding of some kind.

The way to do this is to create an image separately.

  • Use the dimensions shown in the render presets to make an image the perfect size for overlaying. Note that you don’t have to go that big.

  • Use Gimp. Make sure the background is transparent where you want the video to show through.

  • Save the image and return to Blender.

  • Go to "Add" item on the sequencer menu.

  • Add an "Image" and select your file.

  • Position it and open it up a bit by dragging with the right button.

  • I found it easiest to match my entire scene by choosing the video I wanted it on, noting the frame start and length, then choosing the image and manually entering those so they match.

  • Go to the image properties menu on the right and change the "Blend" method to "Over Drop". This makes transparent parts show the video.

  • You can also adjust the offset (which is why you can get away with smaller images than the entire scene).
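If the overlay is smaller than the scene, the offsets that place it are simple arithmetic. This sketch centers the image; the sizes are example values, and treat it purely as the math rather than as exact UI field names.

```python
# Offsets that center a smaller overlay image in the scene.
# Scene and image dimensions below are illustrative, not required values.
def centered_offset(scene_w, scene_h, img_w, img_h):
    return (scene_w - img_w) // 2, (scene_h - img_h) // 2

print(centered_offset(1920, 1080, 400, 100))   # (760, 490)
```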

Split Screen

Similar to overlays, this technique can help present multiple simultaneous video streams. A perfect example is visualizing a side-by-side comparison of two graphics cards. Assume that I take a video with card A called A.mp4 and a video with card B called B.mp4. I want to show the left half of A on the left of the screen and the right half of B on the right side of the screen.

  • "Add" both "movies".

  • Slide them around to align the content and trim the ends if needed.

  • With A selected, "Add" an "Effect Strip", "Transform".

  • With the green transform strip selected, go to "Strip Input" and check both "Image Offset" and "Image Crop".

  • Leave the offset at zeros but check the box.

  • For the crop, change the "Right" value to the width of the video divided by 2, e.g. 960 for 1920 wide (dimensions are helpfully listed under "Edit Strip" properties at the top). (Also make sure your overall render dimensions are as expected.)

  • Then at the top change the setting "Blend" to "Alpha Over".

That’s it for the transform strip. Make sure the transform strip is on top. The B strip needs to be visible, but you can turn off A’s visibility and just let the transform render what is needed from it.
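The crop values for this half-and-half layout are just the frame width split in two. A sketch, assuming both clips share the render resolution:

```python
# Crop values for a left/right split screen of two same-sized clips.
# Clip A keeps its left half (crop the right); B keeps its right half.
def split_screen_crops(width):
    half = width // 2
    return {"A_crop_right": half, "B_crop_left": half}

print(split_screen_crops(1920))   # {'A_crop_right': 960, 'B_crop_left': 960}
```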

Rendering

  • Save the project before attempting it! Actually save early and often, of course.

  • It might not be a great idea to render off of clips that are on flash drives. But it can be done.

  • Double check that Keep UI is set.

  • Choose "Render Animation" or Ctrl-F12 to start.

  • I got a lot of Segmentation faults when using Xvid. Better to use H.264.

Here are some settings hints that did work.

  • Display: Keep UI (!)

  • Preset: HDTV 1080p, a good starting point (implies 1920x1080)

  • Video Codec: H.264

  • Container: MPEG-4

  • Medium quality

  • Medium speed

  • Audio Codec: MP3

  • Frame rate: 30fps (not 29.9whatever)

  • Anti-Alias: Mitchell-Netravali, 8, 1px

  • Output: FFmpeg video

Click "Animation" to begin rendering the video.
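The settings list above can also be applied from Blender's Python console. This is a hedged sketch using the 2.8-era `bpy` property names (they may differ slightly in other versions), wrapped so it only touches the scene when actually running inside Blender.

```python
# Apply the render settings listed above via Blender's Python API.
# Property names are from the 2.8-era API and may vary by version.
def apply_render_settings(scene):
    r = scene.render
    r.resolution_x = 1920                     # HDTV 1080p preset
    r.resolution_y = 1080
    r.fps = 30                                # 30fps, not 29.97
    r.image_settings.file_format = 'FFMPEG'   # Output: FFmpeg video
    r.ffmpeg.format = 'MPEG4'                 # Container: MPEG-4
    r.ffmpeg.codec = 'H264'                   # Video codec
    r.ffmpeg.audio_codec = 'MP3'              # Audio codec
    r.filepath = '/tmp/'                      # where renders land

try:
    import bpy
    apply_render_settings(bpy.context.scene)
except ImportError:
    pass  # not running inside Blender
```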

Mikeycal’s Videos

It seems that a completely reasonable way to study video editing in Blender is to watch some videos on the topic edited therewith. The videos I found helpful were by "Mikeycal Meyers". The problem with the videos was that they were so comprehensive and patient that there are hours of material. That is a worthwhile exercise to initially learn Blender video editing, but after the first viewing, I found I needed a simple reference to the stuff he talked about. Besides providing a quick reference for cryptic key bindings, if I still have trouble, this list of what the videos contain can direct me to it. I commissioned my son to make the original list this is based on.

0 Introduction

No technical content.

  • History

  • Euros and dollars were equal in 2002

  • Blender was bought from someone else

1 Layout - Simple Stuff

  • Top left corner has drop down to select layout; to edit videos select the video editing option.

  • Replace curve graph editor w/ properties menu

  • Sequencer is where you put your videos

  • Properties window is important; used for about everything

  • Set all default properties

  • Render presets

  • HDTV 1080p

  • For YouTube use 30 frames per second

  • Use VLC to find FPS

  • Choose where rendered product goes, usually /tmp at default

  • Reset output format

  • xvid works best

  • Select rgb

  • Set preset to xvid

  • Set bitrate

  • Set audio encoder to MP3

  • Set display to Keep UI

  • Select audio scrubbing

  • Select AV-sync

  • Select Frame dropping

  • Save as startup file (preconfigured template) before any other steps

2

  • Channels are rows

  • Drag up for more channels

  • put cursor at frame 1

  • Click add and select type of media

  • Right click selects

  • Frame-count numbers on the audio and video strips may differ

  • They must be the same for audio and video to stay in sync

  • Select right frame rate

  • Strip/set render size

3

  • Right click is used to select and to drag strips.

  • Import video.

  • Handles at front and back of strips to edit length.

  • Use cut tool to hard cut the strips.

  • Middle mouse (may not work).

  • Number on strips is # of frames.

  • Mouse wheel to zoom.

  • Home to see all of the strips.

4

  • Group select is B or Shift-right click.

  • Soft cut is when you drag the back.

  • Hard cut is Shift-K

  • G key grabs strips to move them.

  • While grabbing, press Y to constrain movement to channel changes only, or X to constrain it along the timeline.

5

  • Channel 0 is above all

  • Higher the channel higher the priority

  • Don’t use channel 0

6

  • Clips can be either resolution

  • Choose bigger resolution

  • If there is distortion after that, use image offset

  • Add/effect strip/transform

  • Transform makes whole new strip

  • Mute original