Houdini Tutorial Week 4

This week, we’re focusing on Houdini’s materials, lighting and rendering features.

Linear workflows:

Linear workflow has become the industry standard that most studios have adopted because of its power and flexibility. It’s much easier to adjust the reflectivity of an object or change the base color of a single model part in Photoshop than to fine-tune and re-render in 3D. This is what gives the PixelSquid product its flexibility for the end user: each rendered layer or element contributes to the final image and can be adjusted independently. For this reason, PixelSquid content must be generated in a linear workflow. The diagram below shows the basic flow of a linear pipeline.

Another way to look at it is as a series of curves applied to different inputs and outputs. If you’re working in a linear pipeline, you’ll need to correct certain types of images so that they produce the correct result. When a bitmap created as an sRGB image enters the pipeline, it needs to be inverse gamma corrected to work properly. Then, when you output the renders, they’ll need to be gamma corrected to appear as expected on your monitor. Working this way ensures that the math adds up correctly.
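
In practice that correction is just a gamma curve. Here’s a minimal Python sketch, assuming the common 2.2-gamma approximation of the sRGB curve (real pipelines use the exact sRGB transfer function or an OCIO config):

import numpy as np

def srgb_to_linear(c):
    # Inverse gamma correction: remove the sRGB encoding from a texture
    # before it enters the linear pipeline (2.2 approximation).
    return np.power(c, 2.2)

def linear_to_srgb(c):
    # Gamma correction applied to the finished render so it looks
    # correct on an sRGB monitor.
    return np.power(c, 1.0 / 2.2)

texture_pixel = np.array([0.5, 0.5, 0.5])     # sRGB-encoded value from a bitmap
linear_pixel = srgb_to_linear(texture_pixel)  # use this for lighting math
display_pixel = linear_to_srgb(linear_pixel)  # round-trips back to ~0.5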

Linear lighting and color:

High-dynamic-range imaging:

High-dynamic-range imaging (HDRI) is a technique used in photographic imaging, film, and ray-traced computer-generated imagery to reproduce a wider range of luminosity than is possible with standard digital imaging or photographic techniques. Standard techniques allow for differentiation only within a certain range of brightness; outside this range there are no visible features, because everything appears pure white in the brighter areas and pure black in the darker areas.

The ratio between the maximum and the minimum tonal value of an image is known as its dynamic range. HDRI is useful for recording many real-world scenes, which can contain anything from very bright, direct sunlight to extreme shade, or very faint nebulae. High-dynamic-range (HDR) images are often produced by capturing and then combining several different, narrower ranges of exposures of the same subject.

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR) or standard-dynamic-range (SDR) photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor. Due to the limitations of printing and display contrast, the extended luminosity range of HDR input images has to be compressed to be made visible.

The process of rendering an HDR image to a standard monitor or printer is called tone mapping. It reduces the overall contrast of an HDR image so that it can be displayed on devices or printouts with a lower dynamic range, and it can be used to produce images that preserve local contrast (or exaggerate it for artistic effect).
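
As a tiny illustration of tone mapping, here is the classic Reinhard curve x / (1 + x) in Python; it’s only an example operator, not what any particular renderer uses:

import numpy as np

def reinhard_tonemap(hdr):
    # Compress unbounded linear HDR values into the 0..1 range
    # while keeping relative contrast in the darker regions.
    return hdr / (1.0 + hdr)

hdr_pixels = np.array([0.05, 0.5, 4.0, 50.0])   # linear radiance values
ldr_pixels = reinhard_tonemap(hdr_pixels)       # -> ~[0.048, 0.333, 0.8, 0.98]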

“HDR” may refer to the overall process, to the HDR imagery itself, or to HDR imagery shown on a low-dynamic-range display such as a screen or a standard .jpg image.

Displacement Rendering:

Displacement maps can be an excellent tool for adding surface detail that would take far too long to build with regular modeling methods. Displacement mapping differs from bump mapping in that it actually alters the geometry, and therefore produces a correct silhouette and self-shadowing. Depending on the type of input, the displacement can occur in two ways: float, RGB, and RGBA inputs displace along the surface normal, while a vector input displaces along the given vector.

The example above shows how a simple plane, with the addition of a displacement map, can produce an interesting, simple-looking scene.
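
A rough sketch of the math behind the two displacement modes mentioned above (plain Python, with made-up inputs just to show the idea):

import numpy as np

def displace_scalar(P, N, height, scale=1.0):
    # Float/RGB/RGBA input: move the point along its surface normal.
    return P + N * height * scale

def displace_vector(P, v, scale=1.0):
    # Vector input: move the point along the vector stored in the map.
    return P + v * scale

P = np.array([0.0, 0.0, 0.0])
N = np.array([0.0, 1.0, 0.0])
print(displace_scalar(P, N, 0.25))                      # [0.  0.25 0. ]
print(displace_vector(P, np.array([0.1, 0.2, 0.0])))    # [0.1 0.2  0. ]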

You should ensure that your base mesh has a sufficient number of polygons. Otherwise, there may be subtle differences between the displaced low-resolution mesh and the high-resolution mesh the map was generated from.

https://docs.arnoldrenderer.com/display/A5AFMUG/Displacement

Displacement Maps:

When it comes to creating additional detail for your low-resolution mesh, displacement maps are king. As the name implies, these maps physically move the mesh to which they are applied. In order to create detail based on a displacement map, the mesh usually has to be subdivided or tessellated so that real geometry is created. The great thing about displacement maps is that they can either be baked from a high-resolution model or hand-painted. A displacement map, like a bump map, consists of grayscale values. Here’s the real kicker, though: while you can use an 8-bit displacement map, you will almost always get better results with a 16-bit or 32-bit displacement map.

While 8-bit files may look fine in 2D space, they can cause banding or other artifacts because they don’t hold enough distinct values. Now for what isn’t so great about displacement maps: creating all that additional geometry in real time is extremely demanding on your system, so most 3D applications calculate the final displacement at render time. Compared to bump or normal maps, a displacement map will also add significantly to your render times. On the other hand, because real geometry is created, the results of a displacement map are hard to beat: as the surface is modified, the silhouette reflects the additional geometry. Before you decide to use one, you should always weigh a displacement map’s cost against its added benefit.
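
A quick back-of-the-envelope calculation shows why the extra bit depth matters; the 10 cm displacement range here is just an assumed example:

levels_8bit = 2 ** 8      # 256 discrete height steps
levels_16bit = 2 ** 16    # 65,536 discrete height steps

displacement_range_mm = 100.0   # assume 10 cm of total displacement

print(displacement_range_mm / levels_8bit)    # ~0.39 mm per step -> risk of visible banding
print(displacement_range_mm / levels_16bit)   # ~0.0015 mm per step -> effectively smooth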

Normal Maps

Normal maps can be thought of as a newer, better type of bump map. As with bump maps, the first thing you need to understand about normal maps is that the detail they create is also fake. No additional resolution is added to the geometry in your scene. In the end, a normal map creates the illusion of surface detail on a model, but it does so differently than a bump map.

As we already know, a bump map uses grayscale values to provide up or down information. A normal map uses RGB information that corresponds directly to the X, Y, and Z axes in 3D space. This RGB information tells the 3D application the exact direction in which the surface normals are oriented for each polygon. The orientation of the surface normals, often just referred to as normals, tells the 3D application how the polygon should be shaded. In learning about normal maps, you should know that there are two completely different types, and they look completely different when viewed in 2D space.
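
For example, decoding a tangent-space normal-map texel is just a remap from the 0..1 texture range to a -1..1 direction vector; a minimal sketch:

import numpy as np

def decode_normal(rgb):
    # Remap a normal-map texel from [0, 1] to a [-1, 1] direction vector
    # and normalize it; R -> X, G -> Y, B -> Z.
    n = np.array(rgb) * 2.0 - 1.0
    return n / np.linalg.norm(n)

# The typical "flat" tangent-space color (128, 128, 255) decodes to
# a normal pointing straight out of the surface along +Z.
print(decode_normal([0.5, 0.5, 1.0]))   # ~[0, 0, 1]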

The most commonly used is the tangent space normal map, which is a mixture of primarily purple and blue. These maps work best for meshes that have to deform during animation, which makes tangent space normal maps great for things like characters. For assets that do not need to deform, an object space normal map is often used. These maps have a wider range of colors as well as slightly better performance than tangent space maps. There are some things you need to be aware of when considering a normal map. Unlike a bump map, these maps can be very difficult to create or edit in 2D software such as Photoshop; you’ll most likely bake a normal map from a high-resolution version of your mesh.

However, there are some exceptions for editing these types of maps. For example, MARI can paint the kind of surface information that we see in a normal map. When it comes to support, normal maps are pretty well integrated into most pipelines, although, as with bump maps, there are exceptions to this rule. One of them is mobile game design: hardware has only recently evolved to the point where mobile games are starting to adopt normal mapping into their pipelines.

Bump Maps:

Bump maps create an illusion of depth and texture on the surface of a 3D model. The texture is created artificially using grayscale values and simple lighting tricks, rather than modeling individual bumps and cracks by hand.

A bump map is the oldest type of map we’re going to look at today. The first thing you need to understand about bump maps is that the detail they create is fake: no additional resolution is added to the model. Typically, bump maps are grayscale images limited to 8 bits of color information. That’s just 256 different shades of black, gray, or white. These values are used in the bump map to tell the 3D software only two things: up or down.

When the bump map values are close to 50% gray, there is little to no detail on the surface. As values get brighter, working their way toward white, details appear to pull out of the surface. By contrast, as values get darker and closer to black, they appear to push into the surface. Bump maps are great for creating tiny details on a model, for instance pores or wrinkles on skin.
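
In other words, each grayscale value is treated as an offset around the 50% gray midpoint; a minimal sketch:

def bump_offset(value, scale=1.0):
    # 0.5 (mid gray) = no change, brighter pulls out, darker pushes in.
    return (value - 0.5) * scale

print(bump_offset(0.5))   #  0.0  -> flat
print(bump_offset(0.9))   #  0.4  -> raised
print(bump_offset(0.1))   # -0.4  -> recessed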

They’re also relatively easy to create and edit in a 2D application like Photoshop, considering you’re just working with grayscale values. The problem with bump maps is that they break easily if the camera views them from the wrong angle. Since the detail they create is fake and no real resolution is added, the silhouette of the geometry the bump map is applied to is always unaffected by the map.

OpenEXR

OpenEXR provides the specification and reference implementation of the EXR file format, the professional image storage format of the film industry. The format’s purpose is to accurately and efficiently represent high-dynamic-range, linear image data and associated metadata, with strong support for multi-part, multi-channel use cases. The library is widely used in host application software where accuracy is critical, such as photorealistic rendering, texture access, image compositing, deep compositing, and DI.
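
As a small, hedged example of what “linear image data” means in practice, here is one way to read the float channels of an EXR with the classic OpenEXR Python bindings (the file name is a placeholder, and the file is assumed to have plain R, G, and B channels):

import OpenEXR
import Imath
import numpy as np

exr = OpenEXR.InputFile("render.exr")        # placeholder file name
window = exr.header()["dataWindow"]
width = window.max.x - window.min.x + 1
height = window.max.y - window.min.y + 1

float_type = Imath.PixelType(Imath.PixelType.FLOAT)
channels = [
    np.frombuffer(exr.channel(name, float_type), dtype=np.float32).reshape(height, width)
    for name in ("R", "G", "B")
]
rgb = np.stack(channels, axis=-1)   # linear, high-dynamic-range pixel values
print(rgb.shape, rgb.max())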

ACES:

ACES is a color system designed to standardize how color is managed from all kinds of input sources (film, CG, etc.) and to provide a future-proof working space for artists at every stage of the production pipeline. Wherever your images come from, you bring them into the ACES color standard, and now your entire team is on the same page.

The ACEScg color gamut is a great advantage for CG artists: a nice big gamut that allows for a lot more colors than old sRGB. Even if you’re working in a linear colorspace with floating-point renders, the so-called “linear workflow,” your primaries (which define “red,” “green,” and “blue”) are probably still sRGB, which limits the number of colors you can accurately represent.
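
If you want to see the conversion concretely, here is a sketch using the PyOpenColorIO bindings with the OCIO v1-style API; the colorspace names are the ones used in the ACES 1.2 config and assume your OCIO environment variable points at it:

import PyOpenColorIO as OCIO

# Reads the config.ocio pointed at by the OCIO environment variable.
config = OCIO.GetCurrentConfig()

# Convert an sRGB texture color into the ACEScg working space.
processor = config.getProcessor("Utility - sRGB - Texture", "ACES - ACEScg")
print(processor.applyRGB([0.5, 0.2, 0.1]))   # ACEScg-encoded triplet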

Working with render nodes:

We can go to [Render View] to see the rendering screen. The effect of material nodes and shaders on the model can be observed in [Render View] in real time. Clicking [Render] starts an interactive render.

Create [Light] in [Object] and re-test the rendering.

Mantra

The Mantra output driver node uses mantra (Houdini’s built-in renderer) to render your scene. You can create a new mantra node by choosing Render > Create Render Node > Mantra from the main menus. You can edit an existing render node by choosing Render > Edit Render Node > (node name). To see the actual network of driver nodes, click the path at the top of the network editor pane and choose the out network under Other Networks.

You can add and remove properties from the output driver just as you can for objects. If you add object properties to the render driver, they define the defaults for all objects in the scene. To edit the node’s properties, select the render node, click the Gear menu in the parameter editor, and choose Edit Rendering Parameters. See Properties for more information on properties, and see the Edit Parameter Interface for more information on adding properties to a node.
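
The same setup can also be scripted; a minimal sketch using Houdini’s Python hou module (the camera and output paths are just examples):

import hou

# "ifd" is the mantra output driver's node type name.
rop = hou.node("/out").createNode("ifd", "my_mantra")
rop.parm("camera").set("/obj/cam1")                      # example camera path
rop.parm("vm_picture").set("$HIP/render/test.$F4.exr")   # example output path
rop.render()                                             # render with the node's current settings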

For complex scenes involving multiple rendering passes, separate lighting and shadow passes, and so on, you can set up dependency relationships between render drivers by connecting the driver nodes. See render dependencies in the Houdini documentation.

https://www.sidefx.com/docs/houdini/nodes/out/ifd.html

Karma (Solaris)

Karma is a new render engine that works on USD scenes in the Solaris context. We didn’t use it much; our teacher said it isn’t really used in production yet because it’s still very recent.

Render region tool:

https://www.sidefx.com/docs/houdini/props/mantra.html

How to create or set up a Houdini camera:

https://www.sidefx.com/docs/houdini/render/cameras.html

Environment Light in Houdini:

Environment lighting adds light to the scene as if it came from a sphere surrounding the scene. Usually, the light is colored using an image called an environment map. The environment map can match the lighting (and reflections) of the scene to a real-world location, or it can be used to add interesting variation to the lighting of the scene.

Realistic environment lighting is cheap to render in a PBR setup.

A typical PBR lighting setup will use the environment light to create the base light level and area lights to represent motivated light sources.

Recommended use of HDR texture resource sites:

https://hdrihaven.com/

Load the HDR image file into the [Environment Map] parameter of the [Environment Light] node. This lights the scene with the HDR image environment, which makes the model render look more natural.
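
For reference, the same environment-light setup can be scripted with the hou module; the HDR file path here is just a placeholder:

import hou

# Create an environment light at the object level and load an HDR map into it.
env = hou.node("/obj").createNode("envlight", "environment_light")
env.parm("env_map").set("$HIP/tex/my_hdri.exr")   # placeholder HDRI path
env.parm("light_intensity").set(1.0)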

The [Area Light Options] values on the [hlight] node affect how soft or hard the shadows are.

Video:

8 bit, 10 bit, 12 bit.. What do these mean?

8 bit color:

In TVs, each individual value represents a specific color in a color space. When we talk about 8-bit color, we’re basically saying that the TV can represent colors from 00000000 to 11111111, a range of 256 values per primary color. Since TVs reproduce red, green, and blue values, 256 variations of each mean that they can reproduce 256 x 256 x 256 colors, or 16,777,216 colors in total. This is known as 24-bit “true color” and has been the standard for both TVs and monitors for many years.

With the advent of 4K HDR, we are able to push a lot more light through these TVs than ever before, and we need to start representing more colors: 256 values per primary color cannot reproduce images nearly as lifelike as 10-bit or 12-bit can.

10 bit color:

10-bit color can represent values from 0000000000 to 1111111111 in each of the red, green, and blue channels, meaning 1024 values per primary, which works out to 64 times as many colors as 8-bit. This can reproduce 1024 x 1024 x 1024 = 1,073,741,824 colors, a huge amount more than 8-bit. For this reason, many of the gradients in an image will look much smoother, as in the image above, and 10-bit images are noticeably better looking than their 8-bit counterparts.

12 bit color:

12-bit color ranges from 000000000000 to 111111111111, giving this color scale 4096 versions of each primary color, or 4096 x 4096 x 4096 = 68,719,476,736 colors. While this is technically 64 times as many colors as even 10-bit, a TV would have to produce images bright enough for you to actually see the difference between the two.
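
The three cases above boil down to the same arithmetic:

for bits in (8, 10, 12):
    levels = 2 ** bits            # values per primary color
    total = levels ** 3           # total reproducible RGB colors
    print(f"{bits}-bit: {levels} per channel, {total:,} colors")

# 8-bit:  256  per channel,     16,777,216 colors
# 10-bit: 1024 per channel,  1,073,741,824 colors
# 12-bit: 4096 per channel, 68,719,476,736 colors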

Light object node:

Light objects are objects that cast light on other objects in a scene. With the light parameters, you can control the color, the shadows, the atmosphere, and the quality of the light on the lit objects. Lights can also be viewed through and used as cameras (Viewport > Camera menu).

https://www.sidefx.com/docs/houdini/nodes/obj/hlight.html

Lights and shadows:

https://www.sidefx.com/docs/houdini/render/lights.html

[Point]:

[Line]:

[Grid]:

[Disk]:

[Sphere]:

[Tube]:

[Distant]:

[Sun]:

Material and Arnold

You can create a [grid] node at the [Object] level and then dive into it. Inside, connect a [material] node to the [grid]'s geometry; the [material] node is where you choose which primitives to assign to (the [Group] parameter) and which material to use.

You can also assign the material directly on the [grid] node at the [Object] level. Select the [grid] node, then choose the material in the [Material] parameter under the [Render] tab.
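
Scripted, this object-level assignment is a one-liner on the grid's Material parameter; the node paths below are just examples:

import hou

# Assign a material by setting the object's Material parameter
# (the "shop_materialpath" parm under the Render tab).
grid = hou.node("/obj/grid1")
grid.parm("shop_materialpath").set("/mat/principledshader1")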

Principledshader:

https://www.sidefx.com/docs/houdini/nodes/vop/principledshader.html

Principledshader (in the mat interface):

https://www.sidefx.com/docs/houdini/nodes/shop/principledshader.html

Recommended website: https://opencolorio.org/

Arnold for Houdini user guide:

https://docs.arnoldrenderer.com/display/A5AFHUG/Sampling

Download the OCIO:

1. Download the zip file and unzip it to a directory whose path contains only English characters.

2. Open [Edit System Environment Variables] and choose [Environment Variables].

3. Select [New].

4. Type [OCIO] as the variable name, then browse to the [config.ocio] file of the version you installed. (Use [config.ocio] from ACES 1.2 this time.)

5. Click [OK]. Now re-open Houdini, and the OCIO color spaces will be available (for example in the Mantra Render View).
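
You can confirm that Houdini picked up the variable from its Python Shell:

import os

# Should print the path to your ACES 1.2 config.ocio file.
print(os.environ.get("OCIO"))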

Download the ACES 1.2 at:

https://github.com/colour-science/OpenColorIO-Configs/tree/feature/aces-1.2-config

[arnold]:

[arnold_light]:

[uvproject]:

Particle rendering test:

Create an [Arnold] render node in the [out] context first.

Then create an [arnold_light] at the [obj] level. Set the [Light Type] property of [arnold_light] to [Skydome], set [Color Type] to [Texture], then load the HDR environment texture.

Next, add the [Arnold] material node in the [mat] context.

Because the Crag model initially has Mantra materials assigned, we need to add an [attribdelete] node to delete the model’s initial material attribute.

After deleting the Crag model’s initial material, connect a [material] node to assign the [Arnold] material to the model. (In the [material] node, point to the [Arnold] node for Crag that was already created in the [mat] context.)

Go to [mat] and create a [standard surface] node for [crag]’s material. Then connect the [shader] output of [standard surface] to the [shader] input of [OUT material].

Then set the material colors on the [shader] of the [standard surface].

Add the material colors to [crag hammer] in the same way.

The [Group] parameter of [material] lets you select the geometry you want to assign a material to, and the [Material] parameter of [material] is where you select the corresponding material path.

Next, we can adjust the specific values of the [Arnold] model material to make it more realistic. [Shift + left mouse button] lets you drag out a partial render region in [Render View], which can speed up render tests. (If you want to go back to rendering the entire Render View, drag the box around the whole interface.)

Adjust the material value of [crag hammer] first.

Adjust the material values of the [crag].

There are some interesting material nodes here.

[flat]:

[curvature]:

Adding [ramp rgb] allows you to adjust the [curvature] colour:

You can connect [curvature] and [ramp rgb] to [standard surface] to output the final model material together. (I changed the colour of [ramp rgb].)

You can also add [ramp rgb] to add shadow details to your model. Use [r] of [ramp rgb] for output to [Specular roughness] of [standard surface]. (You can add darker colours to the shadows)

Only part of the final particle is rendered.

Create the [Arnold] node for [particles] in the [mat] context.

I needed to add the [Arnold] material to [particles] because the file I created previously did not have it. I started by creating a new [geometry] node at the [Object] level. Inside the [geometry] node, create an [object merge] and, in its [Object] path, select the node that [particles] ultimately outputs.
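
A rough Python equivalent of this setup (the particle output path is a placeholder for wherever your particle network actually ends):

import hou

# Bring the particle output into a new object so it can get the Arnold material.
geo = hou.node("/obj").createNode("geo", "particles_render")
merge = geo.createNode("object_merge")
merge.parm("objpath1").set("/obj/particles/OUT")   # placeholder path to the particle output node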

When you go back to the [obj] level, you can see the particles in the render view. In [Material], select the material created earlier.

Create a [user data float] node for [particles] in [mat].

Create a [user data rgb] node to bring the previously created particle colours into the material. Connect the [rgb] output of [user data rgb] to the [Base Color] of [standard surface].

Connect the [range] node.

Adjust the properties of [Arnold] on the [out] screen. Turn on the properties of [Transform Keys] and [Deform Keys] in [Motion Blur] of [Properties].
