See the PDF documentation for full details and images.
How do I use the speed-up?
If you render with only your CUDA GPU(s), E-Cycles renders faster out of the box, making the path-tracing phase 1.5x to 2.8x faster on average.
How do I use the persistent data option, and what does it do?
In 2.79x, in the performance panel, the persistent image option was replaced by persistent data. In scenes where only the camera moves, this option can dramatically speed up your overall render time by doing the pre-processing step only once for the whole animation. It is particularly useful in complex scenes with long pre-processing times.
This option will be ported to 2.80 once 2.80 is stable enough, probably around the official stable release.
What if I want to use E-Cycles with CPU or OpenCL?
E-Cycles is 100% compatible with Blender, so it will work, but your mileage may vary and only CUDA GPUs are officially supported. That said, some users already use E-Cycles with CPU or OpenCL and have had good results, mostly to use the new AI denoiser, the new presets and new options like the better AO simplify, dithered Sobol or scrambling distance. Here are some tips to ensure the best experience in those cases:
If you want to render with your CPU too, deactivate auto tile size and use a tile size of 16x16 or 32x32. In this case, you should use the E-Cycles AI denoiser, as it stays fast even with small tile sizes.
For OpenCL, the speed-up out of the box is around 10-20%. Users have reported good speed-ups using the presets. Use the AI denoiser with the fast preset and divide your sample count by 2, as the AI denoiser works well with low samples too. See here for an example of E-Cycles used with OpenCL. OpenCL is not officially supported.
The new preset system
In both the 2.8 and 2.79 versions, there is now a new preset system. The UI is slightly different: in 2.79x, it’s a drop-down menu; in 2.8, it’s a small icon with 3 horizontal lines at the top left. In both cases, you just have to select the quality level you want. Most of the time, a lower quality level can also be paired with a much lower sample count, as both the scrambling distance and the AO Bounces approximation reduce the overall noise. The best way to find out is to start a render in preview or progressive mode after choosing the preset and note the sample count at which the noise level is good enough for you.
If you use the AI denoiser with the fast presets, you can most of the time get good results with 100 to 200 samples. For higher-quality presets, 400 to 1000 may be required. Without the denoiser and using the physically correct preset, you may need 4000 spp for interiors.
You can use the new presets and/or parameters to either render faster or render in the same time with more realistic values.
Filmic log encoding
To start Blender with filmic log encoding, use the blender_filmic launcher next to the Blender executable in your E-Cycles folder (it is a .bat on Windows and a .sh on Linux/Mac). Then, in your scene, you can choose the log encoding in the colour management panel.
The best explanation of it was given by Bartek in his Blender Conference presentation, available here: https://www.youtube.com/watch?v=kVKnhJN-BrQ. It’s roughly the equivalent of a RAW photo from a digital camera: it saves the whole dynamic range in the best way for high-quality post-production. So if you want to tweak your render outside of Blender, it’s the way to go. Before saving, select none as the look and log encoding for the view.
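On Linux/Mac, the launcher is most likely a thin wrapper script that points Blender at an alternate colour-management configuration via the OCIO environment variable (the standard OpenColorIO mechanism Blender honours at startup). This is only a sketch of what such a launcher could look like; the config path is hypothetical, so check the actual blender_filmic.sh shipped in your E-Cycles folder:

```shell
#!/bin/sh
# Hypothetical sketch of a blender_filmic.sh launcher.
# OCIO tells Blender which OpenColorIO config to load at startup.
export OCIO="/path/to/E-Cycles/filmic_log/config.ocio"  # hypothetical path
exec ./blender "$@"  # forward any extra arguments to Blender
```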
The new AI Denoiser add-on
Being an add-on, it needs to be activated in 2.79x (in 2.8x, it is activated by default thanks to a new feature). In 2.79, you can do so by going to File → User Preferences → Add-ons tab, typing “AI denoi” in the search field and activating the “AI denoiser” add-on. Then click on “save preferences” to have it on by default.
The AI denoiser panel is at the bottom of the render panel. First, choose the quality level you want between 1 and 3: 1 is fast, 2 is a balance between speed and quality, and 3 is very high quality, but it uses many more render passes, increasing memory usage by a factor of around 2. You may need more system RAM to use this level. It also slightly increases GPU memory usage for the extra render passes. To reduce memory usage and render time, you can deactivate SSS and/or transmission denoising if your scene doesn’t use them. If you deactivate them while some materials in the scene do use SSS or transmission, you will get a warning to help you correct the settings.
The AI denoising happens in the compositor, after rendering. That means you can use it for anything, including bakes, for example. The add-on and its panel automate the process of creating the node tree for you. Before using this add-on, it is recommended to open a node editor set to the compositor in 2.79x, or a compositor editor in 2.8x, to see what happens and how it works.
If your scene didn’t have any compositing before, you just have to click on “Create”. If you modify the settings, “Regen” will update the node tree using your new parameters.
If you already have multiple “render layers” nodes in your node tree, the needed denoising nodes will be added to the active one, so you should first select the node of the render/view layer you want to denoise and then click on generate.
A mix node is created automatically, which mixes the noisy and the denoised image. By default, it is set so that only the denoised image is visible. If you want to keep a bit of noise, you can reduce the mix factor: lower values give more noise. Contrary to the old denoiser, it doesn’t need to re-render the frame, but the denoising process may take some time to update the image.
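Per pixel, that mix node computes a plain linear blend between the two inputs. A minimal plain-Python sketch of that behaviour (the function name and the linear-blend assumption are mine, not part of the add-on):

```python
def mix(noisy, denoised, factor):
    """Linear blend between the noisy and denoised pixel values.

    factor = 1.0 -> only the denoised image is visible (the default);
    lower factors keep progressively more of the original noise.
    """
    return noisy * (1.0 - factor) + denoised * factor

# factor 1.0: fully denoised
print(mix(0.8, 0.5, 1.0))   # -> 0.5
# factor 0.75: keeps a little of the noisy input
print(mix(0.8, 0.5, 0.75))
```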
The low memory options are not fully polished yet. They set the number of CPU threads to 1 to reduce memory usage, but this also makes the denoising slower and, if left like that, makes the pre-processing slower too. So if you need this option for high-resolution renders, remember to set the threads in the render → performance panel back to auto before re-rendering for optimal performance.
Tips to increase render speed:
- Minimize the Blender window while rendering; it can make rendering much faster.
- If you have multiple GPUs, prefer the AI denoiser over the one included in Blender. Blender's denoiser currently has a bug which makes it very slow on multi-GPU setups.
- In User Preferences → System, disable your CPU in the CUDA device list. Only if your CPU is much faster than your GPU (for example, a Threadripper with a mid-range GPU) and you render at a high sample count will CPU + GPU be faster. In all other cases (low spp with denoising, high-end GPU, etc.), rendering with only your GPU(s) activated will be faster.
Tips to save memory:
1) Render from the command line for big stills or animations.
You can do so by:
- opening a terminal (hit the Windows key, type "cmd", hit Enter),
- dragging and dropping your blender.exe onto it, then typing -b,
- dragging and dropping the .blend file to render, then typing either -f 1 (to render frame number 1, for example) or -a (to render the whole animation).
It should give something like: c:\path\blender.exe -b d:\path\file.blend -f 1
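Put together, the two variants look like this (the paths are placeholders; substitute your own executable and .blend file):

```shell
# -b   = run Blender in background mode (no UI, saves memory)
# -f 1 = render only frame 1;  -a = render the whole animation
c:\path\blender.exe -b d:\path\file.blend -f 1
c:\path\blender.exe -b d:\path\file.blend -a
```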
2) When using level 3 of the AI denoiser at very high resolutions, you can set the threads in the performance panel to manual and 1 to divide memory usage by 2. This also makes the pre-processing steps slower, as only one thread is used, so do this only if required.
You can also save the passes to a multi-layer EXR, then open another Blender instance to denoise it in the compositor. This frees your RAM from the 3D data and leaves more room for denoising.
You can combine both solutions to denoise very high resolution images.