Quick Start Guide (GUI)¶
Get up and running with TraitBlender's graphical interface in just a few minutes! This guide will walk you through creating your first museum-style specimen image using Blender's user interface.
Alternative Access Methods
This guide covers the GUI interface. TraitBlender can also be accessed programmatically via Python API. See the Quick Start (API) guide for code-based workflows.
Prerequisites¶
Before starting, ensure you have:
- ✅ Blender 4.3.0+ installed
- ✅ TraitBlender add-on installed (Installation Guide)
Step 1: Enable TraitBlender
- Open Blender
- Go to Edit → Preferences → Add-ons
- Search for "TraitBlender"
- Enable the checkbox next to TraitBlender
- The TraitBlender panel should now appear in the 3D Viewport sidebar (press N if hidden)
If you want TraitBlender to be enabled by default when opening Blender, you must save your preferences after activation: in Edit → Preferences, open the ☰ menu and choose Save Preferences.
Step 2: Open the Museum Scene
- In the TraitBlender panel, find the "1 Museum Setup" panel
- Click the "Import Museum" button
- After a moment, the scene will load with a museum table, lighting, and camera setup
- Press Numpad 0 (or 0 with Emulate Numpad enabled) to switch to Camera View
- You should now be looking through the camera at the empty mat lying on the table
Emulate Numpad
If you don't have a numpad, go to Edit → Preferences → Input and check Emulate Numpad. This allows you to use the number keys at the top of your keyboard as a numpad.
Step 3: Configure the Scene
Collapse the Museum Setup panel and expand the "2 Configuration" panel.
The configuration system stores all scene settings (camera, lighting, world, materials, etc.) as a structured PropertyGroup at bpy.context.scene.traitblender_config. The Configuration Panel provides a visual interface for these settings.
In this panel, you'll see several collapsible sections:
- World - Background color and lighting strength
- Camera - Position, rotation, focal length, resolution
- Lamp - Lighting position, color, power, and properties
- Mat - Specimen mat color, position, and scale
- Render - Render engine settings
- Output - Image output format and directory
- Metadata - Information to include in rendered images
Click the arrow next to any section to expand it and adjust settings. Changes are immediately reflected in the scene.
Viewing and Exporting Configuration¶
- Show Configuration: Click this button to see all configuration values in a popup window
- Export Config as YAML: Export your current configuration to a YAML file for reuse
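An exported configuration might look something like the sketch below. The section names mirror the panels listed above, but the exact keys and values here are illustrative, not a verbatim TraitBlender export:

```yaml
# Illustrative sketch only -- section names follow the Configuration
# panels above; the exact keys are whatever TraitBlender exports.
world:
  background_color: [0.05, 0.05, 0.05]
  strength: 1.0
camera:
  location: [0.0, -0.5, 0.35]
  focal_length: 50.0
  resolution: [1920, 1080]
output:
  directory: //renders/
  format: PNG
```

Saved configs like this can be reloaded later via the Config File field in the Museum Setup panel.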
Importing Configuration¶
- In the Museum Setup panel, enter a path to a YAML config file in the Config File field
- Click Configure Scene to load the configuration
Scene Requirements
Configuration import only works if the museum scene has been loaded (via Import Museum). If you encounter issues, click Clear Scene and re-import the museum scene.
Step 4: Select a Morphospace
Expand the "3 Morphospaces" panel.
TraitBlender generates 3D specimens from mathematical models called morphospaces. By default, TraitBlender includes the CO_Raup morphospace, which combines:

- The morphospace developed by Contreras-Figueroa and Aragón (a generalization of Raup's classic shell space)
- Okabe and Yoshimura's allometric shell thickness function
This creates fully 3D models of molluscan shells with realistic geometry.
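For intuition about what a shell morphospace is, here is a minimal sketch of Raup's classic coiling model, which CO_Raup generalizes. The parameters W (whorl expansion), D (distance from the coiling axis), and T (translation) and the formulas are the textbook version of Raup's model, not TraitBlender's internal implementation:

```python
import math

def raup_point(theta, phi, W=2.0, D=0.2, T=1.5, r0=1.0):
    """One point on a shell surface under Raup's classic model.

    theta sweeps around the coiling axis; phi sweeps around the
    generating (aperture) circle. Illustrative only -- TraitBlender's
    CO_Raup morphospace uses its own, richer parameterization.
    """
    # Outer whorl radius grows as a logarithmic spiral.
    r_out = r0 * W ** (theta / (2 * math.pi))
    c = r_out * (1 + D) / 2          # aperture-centre distance from axis
    a = r_out * (1 - D) / 2          # aperture (generating circle) radius
    x = (c + a * math.cos(phi)) * math.cos(theta)
    y = (c + a * math.cos(phi)) * math.sin(theta)
    z = a * math.sin(phi) + T * c    # translation down the coiling axis
    return (x, y, z)

# Sweep a coarse (theta, phi) grid to get a point cloud for one whorl.
shell = [raup_point(2 * math.pi * i / 32, 2 * math.pi * j / 16)
         for i in range(33) for j in range(16)]
```

Varying W, D, and T moves through the space of coiled-shell shapes, from tightly coiled nautiloids to high-spired snails; CO_Raup adds further parameters for aperture shape and shell thickness.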
Select CO_Raup from the dropdown (it should be the only option by default).
Step 5: Import and Manage Datasets
Expand the "4 Datasets" panel.
Import a Dataset¶
- Click the folder icon next to Dataset File
- Select a CSV, TSV, or Excel (.xlsx, .xls) file containing your morphological data
- TraitBlender will automatically:
- Detect the species/label column (looks for: species, label, tips, name, id, etc.)
- Move the species column to the first position
- Load the data into the system
Dataset Format¶
Your dataset should have:

- First column: Species/specimen names (automatically detected)
- Remaining columns: Morphological traits that map to morphospace parameters
For CO_Raup, the parameters include: b, d, z, a, phi, psi, c_depth, c_n, n_depth, n, t, time_step, points_in_circle, eps, h_0, length
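The label-column auto-detection described above can be sketched in plain Python. The dataset values and the detection logic here are illustrative guesses at the described behavior, not TraitBlender's actual loader:

```python
import csv
import io

# Candidate label-column names the panel is described as looking for.
LABEL_NAMES = {"species", "label", "tips", "name", "id"}

# A tiny illustrative dataset: CO_Raup-style trait columns plus a
# species column that is not in the first position.
raw = """b,d,z,a,species
1.8,0.1,1.4,0.9,Conus_textile
3.0,0.05,0.0,1.0,Nautilus_pompilius
"""

rows = list(csv.reader(io.StringIO(raw)))
header = rows[0]

# Find the label column by fuzzy name match, then move it to the front.
idx = next(i for i, h in enumerate(header)
           if any(k in h.lower() for k in LABEL_NAMES))
reordered = [[r[idx]] + r[:idx] + r[idx + 1:] for r in rows]
print(reordered[0])
```

After reordering, the species column comes first and the remaining columns map onto morphospace parameters by name.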
Generate a Specimen¶
- After importing, select a specimen from the Sample dropdown
- Click Generate Sample
- A 3D shell will be generated and placed at the center of the table
- Use the Sample Controls section to adjust the specimen's position and rotation
Edit Dataset¶
Click Edit Dataset to open the CSV viewer (DearPyGui). This allows you to:

- View and filter your data
- Sort by columns
- Edit values
- Export changes back to TraitBlender
Step 6: Adjust Orientations
Expand the "5 Orientations" panel.
This panel controls how specimens are oriented relative to the table. The orientation system is designed to help align specimens consistently, similar to Procrustes alignment for 2D images.
Work in Progress
The orientation system is currently under development. Basic positioning works well using the table coordinate system (tb_coords), but automatic alignment for consistent specimen rotation is still being refined.
Step 7: Configure Transforms
Expand the "6 Transforms" panel.
Transforms allow you to add statistical variation to your images for data augmentation. This is useful for training neural networks or simulating real-world imaging conditions.
Building a Transform Pipeline¶
- Select Property: Choose a configuration section (e.g., "world") and a property (e.g., "color")
- Select Sampler: Choose a statistical distribution (uniform, normal, beta, gamma, etc.)
- Add Transform: Add the transform to the pipeline (currently requires the Python API; GUI support is coming soon)
Available Samplers¶
- uniform: Uniform distribution between low and high
- normal: Normal (Gaussian) distribution
- beta: Beta distribution
- gamma: Gamma distribution
- dirichlet: Dirichlet distribution (for color vectors)
- multivariate_normal: Multivariate normal distribution
- poisson: Poisson distribution
- exponential: Exponential distribution
- cauchy: Cauchy distribution
- discrete_uniform: Discrete uniform distribution
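Most of these distributions exist in Python's standard random module, so a few of them can be illustrated without any dependencies. This is only a sketch of what each sampler draws; TraitBlender's samplers and their parameter names may differ, and dirichlet, multivariate_normal, poisson, and cauchy typically require NumPy/SciPy:

```python
import random

rng = random.Random(42)  # seeded for reproducibility

# A subset of the panel's samplers, drawn with Python's stdlib.
# Parameter values here are arbitrary examples.
samplers = {
    "uniform":          lambda: rng.uniform(0.0, 1.0),      # low, high
    "normal":           lambda: rng.gauss(0.0, 1.0),        # mean, stddev
    "beta":             lambda: rng.betavariate(2.0, 5.0),  # alpha, beta
    "gamma":            lambda: rng.gammavariate(2.0, 1.0), # shape, scale
    "exponential":      lambda: rng.expovariate(1.0),       # rate
    "discrete_uniform": lambda: rng.randint(0, 10),         # low, high
}

draws = {name: fn() for name, fn in samplers.items()}
for name, value in draws.items():
    print(f"{name:17s} {value:.4f}")
```

When a sampler is attached to a configuration property (e.g. world color), each pipeline run draws a fresh value, which is what produces the statistical variation across rendered images.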
Running the Pipeline¶
- Run Pipeline: Execute all transforms in sequence
- Undo Pipeline: Revert all changes back to original values
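The run/undo behavior can be sketched as a pipeline that records each original value before overwriting it, so that undo restores the scene exactly. This is hypothetical illustration code operating on a plain dictionary, not TraitBlender's implementation:

```python
import random

class TransformPipeline:
    """Minimal sketch of the Run/Undo pattern described above.

    A transform pairs a (section, property) path with a sampler.
    Running the pipeline saves each original value before sampling
    a replacement; undoing restores the saved values in reverse order.
    """
    def __init__(self, config):
        self.config = config
        self.transforms = []   # (section, prop, sampler)
        self._originals = []   # (section, prop, old_value), filled on run

    def add(self, section, prop, sampler):
        self.transforms.append((section, prop, sampler))

    def run(self):
        self._originals = []
        for section, prop, sampler in self.transforms:
            self._originals.append((section, prop, self.config[section][prop]))
            self.config[section][prop] = sampler()

    def undo(self):
        for section, prop, old in reversed(self._originals):
            self.config[section][prop] = old
        self._originals = []

# Example: randomize world lighting strength, then revert it.
rng = random.Random(0)
config = {"world": {"strength": 1.0}}
pipe = TransformPipeline(config)
pipe.add("world", "strength", lambda: rng.uniform(0.5, 2.0))

pipe.run()
changed = config["world"]["strength"]   # a value sampled from [0.5, 2.0]
pipe.undo()                             # strength is back to 1.0
```

Undoing in reverse order matters when two transforms touch the same property: the first-recorded original is restored last, leaving the scene in its pre-pipeline state.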
Transform GUI
A full DearPyGui interface for building transforms is in development. Currently, transforms can be built programmatically via Python API or by editing the YAML configuration directly.
Step 8: Render Images
Expand the "7 Imaging" panel.
- Configure your output settings in the Configuration panel (Output section)
- Click Run Imaging Pipeline to render images
The imaging pipeline will:

- Apply any configured transforms
- Render from the current camera view
- Save images to the specified output directory
- Include metadata if configured
Next Steps¶
Congratulations! You've completed the basic workflow. Now you can:
- Explore the API: Automate these steps with Python via the Quick Start (API) guide
- Customize Configurations: Create and save YAML configs for different imaging setups
- Build Transform Pipelines: Use the Python API to create complex data augmentation pipelines
- Work with Multiple Specimens: Process entire datasets programmatically
For more detailed information, see:

- Configuration Files - Understanding YAML configs
- API Reference - Complete API documentation
- Tutorials - Advanced techniques and workflows