Week 6: Modeling and Animation

Adjustments to previous Panda projects this week:

1. Problem: When I imported the panda model into Unity, it had no material.

Solution: The material I had created was [aiStandardSurface], an Arnold shader that Unity does not support. I converted it to the [Lambert] material that comes with Maya, and the model now displays its material correctly in Unity.

2. Problem: After I imported the Panda model into Unity, the model’s head would disappear.

Solution: Some of the model's construction history had not been completely cleared. After selecting the model group, choose [Select] > [Hierarchy] to select everything under it, then choose [Edit] > [Delete by Type] > [Non-Deformer History] to clear the model's extra history.
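
For reference, here is a minimal sketch of the same cleanup done through Maya's Python module (maya.cmds); the group name is a placeholder.

```python
import maya.cmds as cmds

# Select the group and everything below it (equivalent to Select > Hierarchy).
cmds.select('panda_GRP', hierarchy=True)

# Delete non-deformer history on the selection
# (equivalent to Edit > Delete by Type > Non-Deformer History).
cmds.bakePartialHistory(prePostDeformers=True)
```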

I started making the models of the cat piano and of the character bending over to pick up litter this week. Here are some of my reference posters.

Cat Piano:

Figures bending over to pick up the garbage:

Requirements for the model:

1. The cat piano must be modeled in a block-like style, with the keys stackable at the top of the cat’s body model. The model does not need to be animated because the game students have previously programmed games similar to the cat piano.

2. The character model that picks up the garbage needs to be animated by bending over.

The process of making the cat piano

First, I created a [Cube] in Maya and deleted the faces I didn't want to keep. Then, with [Multi-Cut] ([Shift + Right Mouse Button]), I cut the remaining face into the same shape as the cat's head. Although the oval shape of the cat's head could be blocked out faster from a sphere or a cylinder, a [Cube] makes the wiring (topology) easier to control.

Next, I needed to make the cat's ears. I used [Vertex] mode to adjust the wiring of the cat's head, then selected the faces and used [Shift + right mouse] > [Extrude] to extrude the thickness of the head; at this point the ears have the same thickness as the head. While adjusting the head's wiring, it is important to keep one point at the edge and one at the centre of each ear so that the ear is positioned correctly during the Extrude step. After adjusting the wiring of the head, I selected the ear faces (having already prepared the base faces of the ear, I could now pick the correct faces) and ran [Extrude]. While making the ears, I adjusted the positions of the [Vertex] points after every [Extrude] so that the ear shape matched the poster. After finishing one ear, I wanted to simply mirror the other half of the face and ear across the central axis, but I found that the ears and face in the poster are not symmetrical! So I built the cat's other ear with the same steps.

The model of the cat’s head:

Next I started modeling the cat's body. I used a [Cube] to create a model with the same shape as the cat's body, adjusting the wiring so that faces were reserved for the cat's legs.

After selecting those faces on the body, I used [Extrude] to create the legs. I kept repeating the cycle of creating a [Cube], extruding with [Extrude], and adjusting the model's wiring.
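
Here is a rough maya.cmds equivalent of that blocking workflow (create a cube, then extrude selected faces); the names, sizes, and face indices are just illustrative.

```python
import maya.cmds as cmds

body = cmds.polyCube(width=4, height=2, depth=2, name='catBody')[0]

# Extrude two of the cube's faces along their normals to rough in the legs.
cmds.polyExtrudeFacet(body + '.f[1]', body + '.f[3]', localTranslateZ=1.0)
```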

Creating a keyboard for the piano on a cat:

Creating materials for the piano of the cat:

To make sure the model looks similar to the poster, I created some texture maps to add to it. When I created the UVs, I separated the front and back of the cat's face and the sides of the ears. The ears have their own texture maps, which keeps the ear textures from stretching or distorting. All of the cat's materials are made with Maya's built-in [Lambert] material, which ensures the materials also display correctly in Unity.

The model of the cat after adding the materials:

Creation of the background and the hammers used to hit the keys:

The process of creating a model of a figure bending over to pick up the garbage:

In order to design a character model that bends over to pick up garbage, I initially wanted to create a character model, bind a skeleton to it, and then create keyframe animations. The game, however, requires the character's arm to move along a purely vertical path; if I had animated the whole character with keyframes, the hand would have followed a curved trajectory. So I decided to separate the character's hands from the body. I also decided to be creative with the bending of the figure's waist, creating a spring-like waist (the waist is modeled as a segmented stack that rotates when the figure bends over).

First, the body of the figure, the head, the hat, the arms:

For this part of the model, create [Cylinder] and then adjust the position of the model’s points so that the model resembles the shape of the poster.

Creating the waist of the figure, the legs:

Creating a character model of a hand:

This was the first time I modeled a character's hand, and I found it very interesting. I first created the palm and then used [Extrude] to extrude the fingers. As for the hand's wiring, I noticed that two edge loops need to be left between the fingers, and the faces should get progressively smaller toward the fingertips.

The first version of hand:

At this point, the edges of the model’s hands were still very stiff, and I needed [Smooth] to round off the edges. Since the number of faces on the model would increase with Smooth, I manually removed many lines and faces. To reduce the number of faces as much as possible, I removed the top face of the hand, selected the top line of the hand, and added a face using [Fill Hole].
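
A short sketch of that cleanup in maya.cmds: smooth the hand, then cap the open border (the menu command [Fill Hole] corresponds to polyCloseBorder). 'hand_GEO' is a placeholder name.

```python
import maya.cmds as cmds

cmds.polySmooth('hand_GEO', divisions=1)   # rounds off the hard edges
cmds.polyCloseBorder('hand_GEO')           # fills the open hole left on top of the hand
```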

The final model of the hand is shown as follows:

Create patterns of the legs, eyes, and mouth of the figure:

Create paper balls on the ground and give each one a different material texture.

Creating the following backgrounds:

Making the material:

In order to create the feeling that the model and the poster blend together, I decided to give the figure a blue translucent material. All materials are still [Lambert] materials; adjusting the [Transparency] value of the [Lambert] gives the material a translucent feel.
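
A minimal sketch of creating a translucent blue Lambert with maya.cmds; the colour and transparency values are guesses to tweak to taste.

```python
import maya.cmds as cmds

mat = cmds.shadingNode('lambert', asShader=True, name='figureBlue_MAT')
cmds.setAttr(mat + '.color', 0.2, 0.4, 0.9, type='double3')
# Raising transparency toward white makes the material more see-through.
cmds.setAttr(mat + '.transparency', 0.5, 0.5, 0.5, type='double3')
```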

Model animation:

At first, my idea was to create custom attributes to control the model's pose and then keyframe those. I was worried about Unity reporting errors, so in the end I moved the waist model's pivot axes closer to the bend point. This let me create the keyframes simply by rotating and repositioning the model.
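
A sketch of that idea in maya.cmds: move the waist pivot to the bend point, then key a rotation for the bend-over loop. The object name, pivot position, and frame values are placeholders.

```python
import maya.cmds as cmds

# Put the rotate pivot where the waist should hinge.
cmds.xform('waist_GEO', worldSpace=True, rotatePivot=(0, 1.2, 0))

# Key an upright pose, a bent pose, and back again for a simple loop.
for frame, angle in [(1, 0), (12, 70), (24, 0)]:
    cmds.setKeyframe('waist_GEO', attribute='rotateX', time=frame, value=angle)
```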

The final effect of creating the animation keyframe:

Effect of the export to Unity:

Summary of this week:

All of the models and animations for my collaborative project have been completed this week. I will also update the next BLOG if the game team needs to adjust the models when tested. So far, our collaborative project has been going well, and we are all very aware of the effectiveness of our communication. I am very interested in modeling and binding, and in the next few days, I will learn to model and bind characters myself.

Posted in Collaboration Unit

Week 5: Modifying, Rigging and Modeling

This week’s main work on my collaborative project was to revise the model bindings of the previous week and create a new poster model.

Part I: Modifying the model bindings of the previous week

When I saw the feedback from the game majors, I realised that my panda's surface was not smooth: pressing the [3] key in Maya only simulates a rounded preview, and that effect does not carry over to Unity. So, in Maya, I needed to actually add edges to the panda model to achieve a rounded result.

After selecting the model in the Maya viewport, hold down [Shift] and the right mouse button and select [Smooth] to automatically add edges and round off the model. [Smooth] bakes in the effect that the [3] key only previews.

I also had feedback from the game majors that the arms of the panda model were misaligned after the model was imported into Unity.

After checking the model, I found that although I had frozen and cleared the model history during the last model build, this time, there was still a residual model history.

Once I deleted the history of the model, the model was finally properly imported into Unity by my team members.

Then a new problem arose: the model animation created with [Blend Shape] did not play in Unity. My partner told me that blend shapes are imported into Unity as model attributes and therefore do not play back smoothly. For Unity, skeletal binding is the best way to create model animations. My model already had an initial pose, so a one-click bone-binding plugin could not generate the right bones for it. So I started building the model's bones manually.

You can start creating bones by selecting [Create Joints] from [Skeleton] in the [Rigging] module.

In the process of creating bones, we need to be careful: we need to create bones from larger to smaller joints. The shoulder, for example, can drive the arm and the wrist. First, you need to create the bones of the shoulder joint, then the bones of the arm joint, and finally the bones of the wrist joint.

I created the bones of the arms and the head of the panda separately. Then I started the process of skinning. Select [Bind Skin] from [Skin] in the [Rigging] module. The bones are now bound to the model at this point.

Next, I need to paint the weights to which the model is bound. Select [Paint Skin Weights] from [Skin] in the [Rigging] module after selecting the model.

However, while painting the weights, I noticed that parts of the model I had already weighted would change. After discussing this with my partner, we decided the error was caused by the models' weights interacting with each other: because I had not created spine bones, the torso weights shifted as I painted.

So I started to recreate the body's bones, adding a chest bone to the original skeleton. In the [Rigging] module, select [Skeleton] > [Connect Joints] to connect the chest bone to the shoulder bones on either side. Note that you must select the chest bone first and then the shoulder bone (select the child first).

Re-weighting:

Create NURBS circles as controllers and use parent constraints to link the curves to the bones.
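
The sketch below condenses this manual rig into maya.cmds: a small joint chain, a smooth bind, and a NURBS-circle controller constrained to a joint. All names and positions are placeholders.

```python
import maya.cmds as cmds

cmds.select(clear=True)
chest    = cmds.joint(position=(0, 5, 0), name='chest_JNT')
shoulder = cmds.joint(position=(1, 5, 0), name='shoulder_JNT')
arm      = cmds.joint(position=(2, 5, 0), name='arm_JNT')
wrist    = cmds.joint(position=(3, 5, 0), name='wrist_JNT')

# Bind the panda mesh to the chain (equivalent to Skin > Bind Skin).
cmds.skinCluster(chest, 'panda_GEO', toSelectedBones=True)

# A NURBS circle as a controller, driving the shoulder joint.
ctrl = cmds.circle(normal=(1, 0, 0), radius=1.5, name='shoulder_CTL')[0]
cmds.parentConstraint(ctrl, shoulder, maintainOffset=True)
```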

Final result:

Part II: Creation of a Soldier Poster Model

This week, I worked on creating the models for the soldier poster.

A list of the models I need to create:

1. The head of the soldier

2. The soldier's helmet and helmet straps

3. The branch

4. The leaves

My reference poster for the creation of this model:

1. The head of the soldier

I started by importing a head model from the model library and then began modifying the model's edges and faces.

I refined the topology of the head so that it resembles the shape in the poster.

2. The soldier's helmet and helmet straps

First, create a sphere, then use [Soft Selection] to push the points on the bottom of the sphere up into the model. Then adjust the points, edges, and faces of the helmet.

This is the situation of the current model:

The bottom of the head model of the figure is not currently stitched.

After selecting the border edge at the bottom of the model in [Edge] mode, use [Shift plus the right mouse button] and choose [Fill Hole]; this closes the bottom of the model.

In [Face] mode, select the face of the model, then [Shift plus the right mouse button] and choose [Poke Face]. This subdivides the face evenly so that its new edges line up with the rest of the model.

Next I started creating the helmet straps. I began by creating a CV curve, planning to extrude a polygon along the curve to build the strap. I created only one side of the model first so that I could use [Mirror] to create the symmetrical half.

To extrude a polygon along the curve, add a cube at the start of the curve, select the face the curve touches, then [Shift]-select the CV curve and use [Ctrl + E] to extrude along it.

Increasing the value of [Divisions] increases the number of extruded model segments so that the model is more similar to the shape of the CV curve.
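
A maya.cmds sketch of extruding a cube face along a CV curve, roughly what [Ctrl + E] plus the curve selection does interactively; the names, curve points, and face index are placeholders.

```python
import maya.cmds as cmds

strap = cmds.polyCube(name='strap_GEO')[0]
path  = cmds.curve(point=[(0, 0, 0), (1, 1, 0), (2, 1.5, 1), (3, 1.5, 2)],
                   name='strap_CRV')

# Extrude the face along the curve; more divisions = closer fit to the curve shape.
cmds.polyExtrudeFacet(strap + '.f[1]', inputCurve=path, divisions=24)
```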

After creating the helmet straps, I needed to model the leaf and branch decorations. I created the base faces, then used [Shift + Right Mouse Button] > [Multi-Cut] to cut out the desired shape. After selecting the cut faces, I used [Extrude] to give the model thickness.

I added the [aiStandardSurface] material to the model and then adjusted the model’s characteristics and colors.

This is the final result of the model of the soldier:

Summary of this week:

Thanks to my friends for their help. I’ve learned many new ways to make models with Lucy, Yann, and Sean’s help. This week, I’ve learned a bit about manual binding and weighting for models, I’ve also reviewed my modeling approach, and I’ve learned some new tricks. Compared to last week, my modeling speed has improved this week. I’m going to keep working on it!

Posted in Collaboration Unit

Continuing Performance Animation

Part I: Qualities of a Good Animator

Darren Aronofsky, the American film director, says, “An animator has to live life twenty-four times more than a normal human being does, that is, every twenty-four frames of a second.”

When making an animated film, the animator must keep all aspects of the character’s movement in mind, as the animator needs to make every movement of the model. Animators also need to observe and learn from all around them. Observing the things and the people around them, and analyzing their behavior, helps the animator shape the characters’ movements. Animators must also understand the camera angles and the projection of different objects in a given frame. Constant practice in taking photographs in life can help to understand the angles of perspective.

10 TRAITS OF A GOOD ANIMATOR:

1. Creativity

The design of the character must be attractive; it should make the audience like the character. So animators need to be creative, keeping the character simple while paying attention to the details. Unnecessary complexity should be avoided, which is what makes characters more accessible and friendly. Thinking outside the box helps us be better animators and make characters funnier. Creativity is what makes you think differently from others. Animators should know how to shape something simple into something artistically interesting.

2. Humorous

Humor is the key to becoming a good animator. If you have a good sense of humor, your character will automatically portray it. Comedy is the key to friendliness. Humor reflects the movement and behavior of the characters. Since almost all animated films are aimed at children, they are made in the comedy genre. So it’s essential to instill a sense of humor in your character.

3. Patient

Patience is what makes an animator a good animator, so be patient with your work. You know that Rome wasn't built in a day; don't expect your work to be completed as soon as possible. Animation is a job of patience and hard work. Animators who put extra time into the details go the extra mile.

4. Good Observation

We learn from our observations, and we have to instill this quality in ourselves to be good animators. A good observer learns from the things around them and tries to apply them to their characters. They are always studying the movements of the objects and people nearby, looking for interesting movements, walks, and funny expressions on people's faces while they talk. The observer applies these qualities to the character and exaggerates them to make the character more fun.

5. Learn Acting

This may sound a little different, but if you don't know anything about acting, how can you make your characters act? Acting will help you a lot in studying the body mechanics of a movement; acting out a scene yourself teaches you a great deal about how the body works. You don't have to be a good actor for this, just understand the basics. Learn to observe and analyze the acting of professional actors, plays, and exciting performances, and don't forget to notice why and how you feel after watching them. All animated characters are inspired by reality and enhanced to make them more entertaining.

6. Interpretation

It is essential not only to learn from our reference but also to interpret it correctly. A good animator knows the aim of the project and works towards achieving it, adapting the reference work accordingly. They have a clear idea of a particular scene, its motive, and its importance in the film. It's not enough to analyze your reference work; you should also analyze its inspiration to get better at it, and you may find a better way to do things. The look and feel of an animation project, whether it's a film or a game, is critical.

7. Be Motivated

The motivation for a good animator is a crucial quality. If you’re not motivated, then it’s hard to be good at anything you want in your life. If you have an idea, just take the paper and start drawing. Don’t wait for the next day to come. This is how you can remain motivated.

8. Detailing

A good animator always pays attention to details. An animated sequence recreates a reality-inspired world on screen, so detailing every little thing about the character is an essential skill. Adding detail is an essential part of the animator's work ethic; you have to take some extra time to give your character a perfect finishing touch.

9. Teamwork

Animation is teamwork. It is imperative to work well with your team members to be successful. You must have a good nature and a cooperative attitude. Different people work on different tasks, and you should know how to work with them, listen to them, and pull together to achieve the project's goal.

10. Toughness

You can't be too sensitive when someone else analyzes and criticizes your work; you have to learn from your mistakes. If someone doesn't like your work, it doesn't mean you don't know how to make a cartoon; it means the cartoon still needs some improvement. Pay attention to all these traits and you'll know you're on the right track to becoming a good animator. The distance between a good animator and a great animator is practice: the more you practice, the better you get. So these were some important traits for becoming a good animator.

Posted in Advanced and Experimental 3D Computer Animation

Houdini Tutorial Week 4

This week, we’re focusing on Houdini’s materials, lighting and rendering features.

Linear workflows:

The linear workflow has become the industry standard that most studios have adopted because of its power and flexibility. It's much easier to adjust the level of reflectivity of objects or change the base color of a single model part in Photoshop than to fine-tune and re-render in 3D. This is how a product like PixelSquid offers the end-user a great deal of flexibility: each rendered layer or element contributes to the final image. For this reason, PixelSquid content must be generated in a linear workflow. The diagram below shows the basic flow of a linear pipeline.

Another way to look at it is as a series of curves applied to different inputs and outputs. If you’re working in a linear pipeline, you’ll need to correct certain types of images so that they output the correct result. When a bitmap created as an sRGB image enters the pipeline, it needs to be inverse gamma corrected to work properly. Then, when you output the renders, they’ll need to be corrected to appear as expected on your monitor. Working this way will ensure that the math adds up correctly.

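As a small illustration of that "inverse gamma" step, here is the standard sRGB transfer function, converting an sRGB-encoded value to linear light and back:

```python
def srgb_to_linear(c):
    # sRGB decoding (EOTF): small values are linear, the rest follow a 2.4 power curve.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # The inverse encoding, applied when displaying linear renders on an sRGB monitor.
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

mid_gray = srgb_to_linear(0.5)          # ~0.214 in linear light
print(mid_gray, linear_to_srgb(mid_gray))
```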

Linear lighting and color:

High-dynamic-range imaging:

High-dynamic-range imaging (HDRI) is a technique used in photographic imaging, film, and ray-traced computer-generated imagery to reproduce a wider range of luminosity than is possible with standard digital imaging or photographic techniques. Standard techniques allow for differentiation only within a certain range of brightness; outside this range there are no visible features, because everything appears pure white in the brighter areas and pure black in the darker areas.

The ratio between the maximum and minimum tonal values of an image is known as the dynamic range. HDRI is useful for recording many real-world scenes that range from very bright, direct sunlight to extreme shade or very faint nebulae. High-dynamic-range (HDR) images are often produced by capturing and then combining several different, narrower exposure ranges of the same subject.

The two primary types of HDR images are computer renderings and images created by merging multiple low-dynamic-range (LDR) or standard-dynamic-range (SDR) photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor. Due to printing and display contrast limitations, the extended luminosity range of HDR input images must be compressed to make them visible.

The method of rendering an HDR image to a standard monitor or printer is called tone mapping. This method reduces the overall contrast of an HDR image to make it easier to display on devices or printouts with a lower dynamic range and can be used to produce images with preserved local contrast (or exaggerated for artistic effect).
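
A minimal example of one common tone-mapping operator (Reinhard), which compresses HDR luminance into the 0-1 range:

```python
def reinhard(luminance):
    # Simple global Reinhard operator: bright values are compressed the most.
    return luminance / (1.0 + luminance)

for L in (0.1, 1.0, 10.0, 100.0):
    print(L, '->', round(reinhard(L), 3))
```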

“HDR” may refer to the overall process, to the HDR imagery itself, or to HDR imagery shown on a lower-dynamic-range display such as a screen or a standard .jpg image.

Displacement Rendering:

Displacement maps can be an excellent tool for adding surface detail that would take too long to build with regular modeling methods. Displacement mapping differs from bump mapping in that it alters the actual geometry and therefore produces the correct silhouette and self-shadowing effects. Depending on the type of input, the displacement can occur in two ways: float, RGB and RGBA inputs displace along the surface normal, while a vector input displaces along the given vector.
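
A toy illustration of those two displacement modes (pure Python, no renderer involved): a scalar (float/greyscale) input pushes a point along its normal, while a vector input pushes it along the vector itself.

```python
def displace_scalar(point, normal, height, zero_value=0.0, scale=1.0):
    # Move the point along its normal by the (remapped) height value.
    return tuple(p + n * (height - zero_value) * scale
                 for p, n in zip(point, normal))

def displace_vector(point, vector, scale=1.0):
    # Move the point along the stored displacement vector.
    return tuple(p + v * scale for p, v in zip(point, vector))

print(displace_scalar((0, 0, 0), (0, 1, 0), height=0.5))   # moves up along the normal
print(displace_vector((0, 0, 0), (0.2, 0.5, 0.1)))          # moves along the vector
```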

The example above shows how a simple plane, with the addition of a displacement map, can produce an interesting, simple-looking scene.

You should ensure that your base mesh geometry has a sufficient number of polygons. Otherwise, there may be subtle differences between the low-resolution mesh and the high-resolution mesh from which it is generated.

https://docs.arnoldrenderer.com/display/A5AFMUG/Displacement

Displacement Maps:

When it comes to creating additional detail for your low-resolution mesh, displacement maps are king. These maps physically move (as the name implies) the mesh to which they are applied. In order to create detail based on a displacement map, the mesh usually has to be subdivided or tessellated so that real geometry is created. The great thing about displacement maps is that they can either be baked from a high-resolution model or hand-painted. Like a bump map, a displacement map consists of grayscale values. Here's the real kicker, though: while you can use an 8-bit displacement map, you will almost always get better results from a 16-bit or 32-bit displacement map.

While 8-bit files may look fine in 2D space, they can sometimes cause banding or other artifacts due to the lack of values. Now for the downside of displacement maps: creating all that additional geometry in real time is extremely demanding on your system. As a result, most 3D applications calculate the final displacement at render time, and compared to bump or normal maps, a displacement map will add a significant amount of time to your renders. In exchange for that additional geometry, though, the results of a displacement map are hard to beat: as the surface is modified, the silhouette reflects the added detail. You should always weigh a displacement map's cost against its added benefit before deciding to use one.

Normal Maps

Normal maps may be referred to as a newer, better type of bump map. As with bump maps, the first thing you need to understand about normal maps is that the details they create are also fake. There is no additional resolution added to the geometry of your scene. In the end, a normal map creates an illusion of depth detail on the surface of a model, but it does it differently than a bump map.

As we already know, the bump map uses grayscale values to provide up or down information. The normal map uses RGB information that corresponds directly to the X, Y and Z axis in 3D space. This RGB information tells the 3D application the exact direction in which the surface normals are aligned for each polygon. The orientation of the surface normals, often referred to as normal, tells the 3D application how the polygon should be shaded. In learning about normal maps, you should know that there are two completely different types of maps. When viewed in 2D space, these two types look completely different.
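
A small sketch of how the RGB values in a tangent-space normal map decode into a direction: each 0-255 channel is remapped from [0, 1] to [-1, 1].

```python
def decode_normal(r, g, b):
    # Remap each 8-bit channel from [0, 255] to the [-1, 1] axis range.
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

# The typical flat "purple/blue" pixel (128, 128, 255) points straight out of the surface.
print(decode_normal(128, 128, 255))   # roughly (0.0, 0.0, 1.0)
```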

The most commonly used type is called a tangent space normal map and is a mixture of primarily purple and blue. These maps work best for meshes that have to deform during animation, so tangent space normal maps are great for things like characters. For assets that do not need to deform, an object space normal map is often used. These maps have a range of different colors as well as slightly better performance than tangent space maps. There are some things you need to be aware of when considering a normal map: unlike a bump map, these maps are very difficult to create or edit in 2D software such as Photoshop. You will most likely bake a normal map from a high-resolution version of your mesh.

However, there are some exceptions for editing these types of maps. For example, MARI can paint the kind of surface information that we see in a normal map. When it comes to support, normal maps are pretty well integrated into most pipelines, but in contrast to bump maps there are exceptions to this rule. One of them is mobile game design: hardware has only recently evolved to the point where mobile games are starting to adopt normal mapping into their pipelines.

Bump Maps:

Bump maps create an illusion of depth and texture on the surface of a 3D model with computer graphics. Textures are artificially created on the surface of objects using grayscale and simple lighting tricks, rather than creating individual bumps and cracks manually.

A bump map is one of the older types of maps we're going to look at today. The first thing you need to understand about bump maps is that the detail they create is fake; no additional resolution is added to the model. Typically, bump maps are grayscale images limited to 8 bits of color information: just 256 shades from black through gray to white. These values are used in the bump map to tell the 3D software just two things: up or down.

When the bump map values are close to 50% gray, there is little to no detail on the surface. When values get brighter, working their way toward white, details appear to pull out of the surface. By contrast, when values get darker and closer to black, they appear to push into the surface. Bump maps are great for creating tiny details on a model, for instance pores or wrinkles on skin.
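
An illustrative sketch of how those grayscale "up or down" values get used: finite differences of neighbouring heights give a slope, which can be turned into a perturbed normal. The 3x3 height patch here is hypothetical.

```python
import math

heights = [[0.2, 0.3, 0.4],
           [0.2, 0.3, 0.4],
           [0.2, 0.3, 0.4]]

def bump_normal(h, x, y, strength=1.0):
    dx = (h[y][x + 1] - h[y][x - 1]) * strength   # slope in x
    dy = (h[y + 1][x] - h[y - 1][x]) * strength   # slope in y
    length = math.sqrt(dx * dx + dy * dy + 1.0)
    return (-dx / length, -dy / length, 1.0 / length)

# The heights ramp up toward +x, so the normal tilts away from +x.
print(bump_normal(heights, 1, 1))
```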

They're also relatively easy to create and edit in a 2D application like Photoshop, considering that you're just working with grayscale values. The problem with bump maps is that they break easily if viewed from the wrong angle by the camera. Since the detail they create is fake and no real resolution is added, the silhouette of the geometry the bump map is applied to is always unaffected by the map.

OpenEXR

OpenEXR provides the specification and reference implementation of the EXR file format, the film industry's professional image storage format. The format's purpose is to accurately and efficiently represent high-dynamic-range, linear image data and associated metadata, with strong support for multi-part, multi-channel use cases. The library is widely used in host application software where accuracy is critical, such as photorealistic rendering, texture access, image compositing, deep compositing, and DI.

ACES:

ACES is a color system designed to standardize how color is managed from all kinds of input sources (film, CG, etc.) and to provide a future-proof working space for artists at every stage of the production pipeline. Wherever your images come from, you're smooshing them into the ACES color standard, and now your entire team is on the same page.

The ACEScg color gamut is a great advantage for CG artists: a nice big gamut that allows for far more colors than old sRGB. Even if you're working in a linear colorspace with floating-point renders, the so-called “linear workflow,” your primaries (which define “red,” “green,” and “blue”) are probably still sRGB, which limits the number of colors you can accurately represent.

Working with render nodes:

We can go to the [Render View] to see the rendered image. The effect of material nodes and shaders on the model can be observed in the [Render View] in real time; clicking [Render] lets us render interactively.

Create [Light] in [Object] and re-test the rendering.

Mantra

The Mantra output driver node uses Mantra (Houdini's built-in renderer) to render your scene. You can create a new mantra node by choosing Render > Create Render Node > Mantra in the main menus. You can edit an existing render node by choosing Render > Edit Render Node and then the node's name. To see the actual network of driver nodes, click the path at the top of the network editor pane and choose the out network (under Other Networks).

You can add and remove properties on the output driver just as you can for objects. If you add object properties to the render driver, they act as defaults for all objects in the scene. To edit the node's properties, select the render node, click the gear menu in the parameter editor, and choose Edit Rendering Parameters. See Properties for more information on properties, and see Edit Parameter Interface for more on adding properties to a node.

For complex scenes involving multiple rendering passes, separate lighting and shadow passes, and so on, you can set up dependency relationships between render drivers by connecting the driver nodes. See render dependencies.
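
A small sketch using Houdini's Python module (hou) of creating and configuring a mantra render node, roughly what the menu action above does; the camera path and output path are placeholders.

```python
import hou

rop = hou.node('/out').createNode('ifd', 'mantra_beauty')   # 'ifd' is the mantra ROP type
rop.parm('camera').set('/obj/cam1')
rop.parm('vm_picture').set('$HIP/render/beauty.$F4.exr')
rop.render()   # kick off a render with the node's current settings
```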

https://www.sidefx.com/docs/houdini/nodes/out/ifd.html

Karma (Solaris)

Karma is Houdini's new render engine. It is used only inside Solaris (the USD context) and is still quite new; our teacher said it is rarely used in production yet because it is so recent.

Render region tool:

https://www.sidefx.com/docs/houdini/props/mantra.html

How to create or set up a Houdini camera:

https://www.sidefx.com/docs/houdini/render/cameras.html

Environment Light in Houdini:

Environment lighting adds light to the scene as if it had come from a sphere surrounding the scene. Usually, the light is colored using an image called an environment map. The environment map can match the lighting (and reflections) of the scene to the real world’s location, or it can be used to add interesting variations to the lighting of the scene.

Realistic environmental lighting is cheap to render in a PBR rendering setup.

A typical PBR lighting setup will use the environment light to create the base light level and the area lights to represent a motivated light source.

Recommended use of HDR texture resource sites:

https://hdrihaven.com/

Import the HDR image file in the [Environment Map] parameter of the [Environment Light]. This lights the scene with the HDR image, which makes the render look more natural.

The [Area Light Options] value in the [hlight] node affects how soft or hard the shadow is.
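
A quick hou sketch of that environment-light setup (the HDR path is just an example):

```python
import hou

env = hou.node('/obj').createNode('envlight', 'sky_env')
env.parm('env_map').set('$HIP/tex/studio_small_08_4k.hdr')   # example HDRI path
env.parm('light_intensity').set(1.5)                          # overall brightness
```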

Video:

8 bit, 10 bit, 12 bit.. What do these mean?

8 bit color:

In TVs, each individual value represents a specific color within a color range. When we talk about 8-bit color, we're basically saying that the TV can represent colors from 00000000 to 11111111, a range of 256 values per channel. Since all TVs can represent red, green, and blue values, 256 variations of each means that TVs can reproduce 256x256x256, or 16,777,216 colors in total. This 24-bit “true color” representation has been used as a standard for both TVs and monitors for many years.

With the advent of 4K HDR, we’ll be able to push a lot more light through these TVs than ever before. We need to start representing more colors, as 256 values for each primary color will not reproduce almost as lifelike images as something like 10 or 12 bits.

10 bit color:

10-bit color can represent values from 0000000000 to 1111111111 in each of the red, green, and blue channels, meaning it can represent 64x the colors of 8-bit. This reproduces 1024x1024x1024 = 1,073,741,824 colors, which is an absolutely huge amount more than 8-bit. For this reason, many of the gradients in an image will look much smoother, as in the image above, and 10-bit images are quite noticeably better looking than their 8-bit counterparts.

12 bit color:

12-bit color ranges from 000000000000 to 111111111111, giving each primary color 4096 values, or 4096x4096x4096 = 68,719,476,736 colors. While this is technically 64x more colors than even 10-bit, a TV would have to produce images bright enough for you actually to see the difference between the two.
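
The arithmetic behind those numbers: values per channel and total colors for a given bit depth per channel (R, G, B).

```python
for bits in (8, 10, 12):
    per_channel = 2 ** bits          # values per channel
    total = per_channel ** 3         # all R x G x B combinations
    print(f'{bits}-bit: {per_channel} values/channel, {total:,} colors')

# 8-bit : 256 values/channel,  16,777,216 colors
# 10-bit: 1024 values/channel, 1,073,741,824 colors  (64x more than 8-bit)
# 12-bit: 4096 values/channel, 68,719,476,736 colors (64x more than 10-bit)
```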

Light object node:

Light objects are objects that shed light on other objects in a scene. With the light parameters, you can control the color, the shadows, the atmosphere, and the quality of the objects lit by the light. Lights can also be looked through and used as cameras (Viewport > Camera menu).

https://www.sidefx.com/docs/houdini/nodes/obj/hlight.html

Lights and shadows:

https://www.sidefx.com/docs/houdini/render/lights.html

[Point]:

[Line]:

[Grid]:

[Disk]:

[Sphere]:

[Tube]:

[Distant]:

[Sun]:

Material and Arnold

You can create a [grid] node on the [Object] screen and then go to the [grid] node. Use the [material] node to connect to the [grid] node. The material can be selected from the [Group] section of [material].

You can also adjust the material directly to the [grid] node on the [Object] screen. Select the [grid] node, then select the material from [Material] under the [Render] property bar.

Principledshader:

https://www.sidefx.com/docs/houdini/nodes/vop/principledshader.html

Principledshader(in mat interface):

https://www.sidefx.com/docs/houdini/nodes/shop/principledshader.html

Recommended website: https://opencolorio.org/

Arnold for Houdini user guide:

https://docs.arnoldrenderer.com/display/A5AFHUG/Sampling

Download the OCIO:

1. Download the zip file and unzip it to a directory whose path contains only English characters.

2. Open [Edit System Environment Variables] and choose [Environment Variables].

3. Select [New].

4. Name the variable [OCIO] and set its value to the path of the latest version of the configuration file. (Here, use [config.ocio] from ACES 1.2.)

5. Click [OK]. Now re-open Houdini, and the new colour configuration will be picked up by the [Mantra Render View].
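
As an alternative to the manual steps above, here is a sketch of setting the variable from Python before launching Houdini; both paths are examples only.

```python
import os
import subprocess

# Example path only: point this at wherever you unzipped the ACES 1.2 config.
os.environ['OCIO'] = r'C:\ACES\OpenColorIO-Configs\aces_1.2\config.ocio'

# Launch Houdini from this environment so it inherits the OCIO variable
# (replace 'houdini' with your actual Houdini executable path if needed).
subprocess.run(['houdini'])
```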

Download the ACES 1.2 at:

https://github.com/colour-science/OpenColorIO-Configs/tree/feature/aces-1.2-config

[arnold]:

[arnold_light]:

[uvproject]:

Particle rendering test:

Create [Arnold] in the [out] module first.

Then create an [arnold_light] at the [obj] level. Set the [Light Type] property of the [arnold_light] to [Skydome]. Set [Color Type] to [Texture], then add the HDR environment texture.

Next, add an [Arnold] material node on the [mat] screen.

Because the crag model initially has a Mantra material, we need to add an [attribdelete] node to delete the model's initial material attribute.

After deleting the crag model's initial material, connect a [material] node to assign the [Arnold] material to the model. (In the [material] node, point to the [Arnold] material already created for the crag on the [mat] screen.)

Go to [mat] and create a [standard surface] inside the [crag] material node, then connect the [Shader] output of the [standard surface] to the [Shader] input of [OUT material].

Then add the material colors to the [shader] of the [standard surface].

Add the material colors to [crag hammer] in the same way.

The [Group] field in the [material] node selects the part of the model for which you want to assign a material, and the [Material] field is where you set the path to the corresponding material.

Next, we can adjust the specific values of the [Arnold] material to make it look more realistic. [Shift + left mouse button] lets you drag out a render region in [Render View] so that only part of the frame is rendered, which speeds up test renders. (If you want to go back to rendering the entire Render View, drag a box around the whole frame.)

Adjust the material value of [crag hammer] first.

Adjust the material values of the [crag].

There are some interesting material nodes here.

[flat]:

[curvature]:

Adding [ramp rgb] allows you to adjust the [curvature] colour:

You can connect [curvature] and [ramp rgb] to [standard surface] to output the final model material together. (I changed the colour of [ramp rgb].)

You can also add [ramp rgb] to add shadow details to your model. Use [r] of [ramp rgb] for output to [Specular roughness] of [standard surface]. (You can add darker colours to the shadows)

Finally, only the particles are left to render.

Create the [Arnold] material node for the [particles] on the [mat] screen.

I needed to add the [Arnold] attribute to the [particles] because the file I created previously did not have it. I started by creating a new [geometry] node at the [obj] level. Inside it, I created an [object merge] and set its [Object] path to the node that the particle network finally outputs.

When you go back to the [obj] screen, you can see that the particles are in the rendering screen. In [Material], select the material that I have created before.

Create a [user data float] node in the [particles] material on the [mat] screen.

Create a [user data rgb] node to add the previously created particle colours to the material. Connect [Base Color] of [standard surface] to [rgb] of [user data rgb].

Connect the [range] node.

Adjust the properties of [Arnold] on the [out] screen. Turn on the properties of [Transform Keys] and [Deform Keys] in [Motion Blur] of [Properties].

Posted in Houdini

Week 4: Modeling, Blend Shape, Substance Painter Textures

This week, Lucy and I are ready to start building poster models and animations. It’s my primary responsibility this week to model and animate the panda.

A list of models that I need to create:

1. The panda

2. Glass of wine

3. Background space model and 3D characters

My reference poster for creating this model:

I need to create a 3D panda model and create an animation of a panda model that raises a glass.

Study the Panda cartoon and create a three-view of the Panda model:

I knew the model I was creating was based on the panda image in the poster, but I thought it might be useful to gather some panda-related material to give me some more specific design ideas when creating the model.

I started by collecting some panda images online.

Pandas in real life:

Some cartoon images of pandas:

Three views of the panda that I’ve drawn:

The process of creating a panda model:

I started by putting the three-view drawing into the scene, and then I started making the panda's body. This was the first time I had made a model of a character that needed to move. I can imagine that I will run into many situations along the way, but the best way to learn is to practice, so I am going to try my best with my panda model.

I started by creating a [Sphere] and stretching it. Then I added a few edge loops to the model and adjusted the positions of the points. Next, I deleted half of the model and prepared to create the full body by mirroring. The important thing to note is that the points along the model's centre need to sit exactly on the central axis.

Use [Mirror] in [Mesh] to create the other half of the body. Alternatively, you can duplicate the model, flip it on the corresponding axis, and then merge it with the original model.
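
A maya.cmds sketch of that second approach: duplicate the half-body, flip it across X, combine, and merge the seam vertices. The names and merge threshold are placeholders.

```python
import maya.cmds as cmds

half = 'pandaBody_half'
mirror = cmds.duplicate(half, name='pandaBody_mirror')[0]
cmds.scale(-1, 1, 1, mirror)                                  # flip across the YZ plane
body = cmds.polyUnite(half, mirror, name='pandaBody_GEO')[0]  # combine the two halves
cmds.polyMergeVertex(body, distance=0.001)                    # weld the centre seam
cmds.delete(body, constructionHistory=True)
```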

I then started to model the arms. I added edges and points to the original model and adjusted their layout to lay out the arm topology.

I chose the face in the arm position, and then I used [Extrude] to extrude the arm model.

The legs are made in the same way as the arms by selecting the faces and then extruding them using [Extrude].

The panda body model looks pretty good at the moment.

So I started to model the head of the panda.

I made two versions of the panda's head. I made the first version based on the three-view drawing, but probably because of the number of edges and the volume around the panda's eyes, nose and mouth, the model looked ugly.

This is the first version of the head:

After summing up my failures, I started the first revision of the panda head model. This time, I wanted my panda's head to look more like a cartoon, so I thought I might try changing the base model (a Cube) and adjusting the positions of its edges and points.

The second version of the head of the panda after the changes:

The panda’s head looks a lot better this time.

Make a model of a wine glass prop:

The game students wanted to create a panda model loop animation. Lucy suggested that I might try [Blend Shape] to create this animation.

For example, the first keyframe of the model is the initial resting pose, and we can set the second keyframe to the glass-raising pose. You don't need to create a skeleton to use [Blend Shape]; you only need to change the model's shape. It is the same as making two poses of one model and then using the [Blend Shape] attribute in the channel box to control the model: generally speaking, [0] corresponds to the initial pose and [1] to the final pose. You can make the model move in a loop by keyframing the [Blend Shape] attribute on the timeline. Select the transformed (target) model first, then the model in its initial state, then choose [Blend Shape] from the [Deform] menu. But because I had changed the position and the point and edge count of the model, [Blend Shape] could not be applied between my two models. So I opened [Animation Editors] from the [Windows] menu, chose [Shape Editor], and used the [Shape Editor] to modify my panda model.
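
A minimal sketch of that blend-shape setup in maya.cmds, assuming two meshes with identical topology; 'panda_rest' and 'panda_raised' are placeholder names.

```python
import maya.cmds as cmds

# Target first, base last: 'panda_raised' becomes a target that deforms 'panda_rest'.
bs = cmds.blendShape('panda_raised', 'panda_rest', name='raiseGlass_BS')[0]

# Key the first target weight 0 -> 1 -> 0 for a simple loop.
for frame, weight in [(1, 0.0), (12, 1.0), (24, 0.0)]:
    cmds.setKeyframe(bs + '.w[0]', time=frame, value=weight)
```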

Tutorial for [Blend shape]:

I had many problems with my model due to my inexperience in modeling (it did not look rounded and was very different from the image on the poster). Lucy, Yann and Sean helped me analyze the issues and gave me some advice.

Issues with the panda model:

1. The panda model's initial pose is not suitable for holding a glass, so I need to adjust the panda model significantly.

2. The panda position in the poster is rotated about 30° to the side, so I still need to adjust the position of the body of the model.

3. After the modification, my model looks uneven. I still need to use the sculpting brushes to smooth it out.

4. Ideally, the model’s final pose should be the same as the panda in the poster.

Continue to modify the model:

First, after rotating the model, I found that the panda's right arm could be adjusted to the desired pose just by moving points. However, the panda's left arm was not in the correct position, so I had to delete the left arm and then select new faces and [Extrude] a new arm.

To make the model look more rounded, I used the [Sculpting] brushes to make adjustments.

I also used [Soft Selection] to select points and faces to adjust.

The model now looks a lot better than the original one.

Version 1 and Version 2:

I used the [Shape Editor] to create the [Blend Shape]:

First, select a body model in its initial state, then select [Create Blend Shape] from the [Shape Editor], then click [Add Target]. Then click [Edit] to activate the [Edit] icon and turn it red. The [Blend shape] property in the model properties bar will control the model at this point. [0] corresponds to the initial pose; [1] corresponds to the final pose.

To create keyframe animation, I created a circular motion by controlling the value of [Envelope] in the [Blend Shape] model properties bar. I also added model shifts to the panda’s head and shoulders so that when the panda raises its hand, the shoulders and the head follow the movement to make the animation look more natural. I created the keyframe for the wine glass animation directly after selecting the model.

Testing the [Blend Shape] animation:

Creation of backgrounds and 3D lettering:

For the simple background, I created a [Cube], laid out the UVs, imported it into Substance Painter for filling and painting, and finally exported the texture maps. The text model was created by selecting [Type] from the [Create] menu in Maya.

Current effects on the background:

I will continue to refine the backgrounds according to the scene’s new requirements once the game majors have tested the program.

UVs and Substance Painter texture maps for the panda:

This time, I didn’t do a particularly good job of UVing my panda model. I used [Cut] and [Unfold] to crop and expand in the UV Editor. The UVs I made looked a little strange because of the model wiring problem. I often used models already made by the project team when I used to make UVs before, so I didn’t have any problems with the model wiring. Luckily, this did not affect the final result. I will continue to explore ways to make UVs in models that I create in future projects.

Current panda model animation test:

Summary of the week:

This week, I had my first attempt at modeling a character, and the modeling process was really interesting. Since I have little experience modeling characters, parts of my model's design (such as the layout of the edge loops) did not make sense. Although the model I created is a low-poly model, I would like to explore modeling further and will continue to practise creating character models. I used [Blend Shape] to make the model move, and I hope to learn something about rigging next week and put it into practice. Thank you to Lucy, Yann, and Sean for their help this week.

Posted in Collaboration Unit

Week 3: Scene Modeling

This week Lucy and I started working on modeling. During this week’s work, I have learned the most important skill apart from my professional knowledge: team project coordination. We had some great ideas for the game during the previous week and the models’ conception and design. This week our team needed to produce concrete models to showcase our design ideas. This week we are communicating and tweaking as we go along. Although we all have many assignments, we are all trying our best to coordinate our time to complete our collaborative project tasks.

Based on the game design and layout drawings from the previous week, I started by building a simple scene model. My idea was to build the most basic scene model in the shortest time possible so as not to waste task time. I have received some gallery scene images that I can use as reference.

Our game is a VR game where the player can interact with the posters in the scenario. When I talked to the students about the game, I found out that their design for the game was that they wanted the posters to be displayed on a table. We hope to eventually present an interactive poster built from a 3D model (similar to a full 3D model) in the game.

The result is similar to a space where the player can look at a 3D model in a picture frame and interact with it to complete the game. So my initial design idea was to create a larger gallery scene to hold our 3D models (3D posters), to avoid the cramped feeling of the player having to preview a large number of models in a small space.

In the process of making it, I also discovered a lot of situations that I had not considered before. As our models need to be interactive, the spacing between the posters also has to be taken into account. And, as in any VR experience, the layout of the gallery's scenes is also key to how the audience will experience it. So I first created two versions of a simple scene model to preview the effect; if they don't work, we'll still have plenty of time to make changes. (The small space partitioned off inside is used for the 3D poster models.)

Version 1:

Version 2:

When I shared the simple scene model with Lucy, I found that we had a few different ideas, and Lucy shared with me the layout she had drawn. In this layout, the posters are hung on the wall, and there are tables on the floor below the posters to place items on it.

So Lucy and I decided to help each other out with the model to discuss how to modify it and conceptualise the subsequent build of the model together.

During our discussion, Lucy shared with me that she didn’t understand at first why I had changed the layout of the model. When she started to build a draft of the scene, she realized that as we had designed the 3D Gallery with a square floor, if the posters were too densely packed or spread out, it would affect the player’s experience. After some discussion and consultation with the group, we opted to build a small Gallery space, and Lucy quickly built a model of the scene, which I will continue to enrich with her version of the scene.

Lucy and I looked at the work of the artist Marcin on Sketchfab. We both felt that the design of this gallery model was very similar to the gallery we wanted to create. The gallery is very brightly lit from the ceiling, and the light level is very natural, without being visually uncomfortable. Our gallery is not very spacious, and this lighting reference helps us illuminate the posters and create a sense of space. So Lucy and I decided to create a ceiling light installation in the gallery.

As the Gallery in our current project is only one room, we decided to create glass in the corners around the house to expand the room’s space and illumination visually. This was my initial design idea for the windows.

Lucy drew schematics to share her ideas with me. The edge of the corner is not a window frame but consists of two pieces of glass meeting. I thought this was an interesting idea of hers: without window frame constraints, the corner looks a little more spacious in terms of perspective. We could also apply a rough glass material, which would preserve the effect of light coming into the gallery from outside while reducing the number of outdoor scenes we need to model.

So far, we are all going well, and I have created the first draft to pass on to Lucy to look at the layout of the room and the poster’s placement. I have placed models of the characters in the scenes better to compare the scale of the posters to the scenes.

There was an unexpected little mix-up between us. Lucy thought this was the final model of the scene, and she told me she was going to help me with some changes. I thought she wanted to adjust the posters' position or layout, but I didn't expect her to start working on the final model. When we handed the model over for the next step, she told me that she had already extruded and beveled it. I realised that we had misunderstood each other. We both found this very funny, and we will be more careful next time we communicate to make sure we don't get the wrong message again.

On the 16th of February, I attended a tutorial with WU students in VR, and the tutor suggested that the scale of the posters in our project scenes be reduced a bit. When the player is experiencing the game, a properly scaled poster model can be easily captured by the player’s view. I added lights and light stands to the scene model and modified the posters’ size and position. The game students wanted us to place a table in front of each poster model to hold some decoration items. Given the space available in the Gallery, Lucy and I suggested that only the main tables that would hold the interactive props be placed first. Other tables can be added or subtracted as we see fit. The table model is currently set in a simple style based on the gallery’s design, and we can adjust it according to the actual needs of the project later on.

Simulation:

Model production planning:

Lucy and I have had a great time working together, sharing knowledge and inspiration as we go. I use screen sharing in my Discord channel, which is convenient because we can see what's happening directly on each other's computers. I shared with Lucy how to quickly make shapes that fit the design drawings: use [Multi-Cut] to cut the model's surface, then select and adjust the points of the model. Once you have a flat shape that fits the design, select [Extrude] to extrude the surface and you're done.

I also shared some knowledge about Substance Painter with Lucy.

Lucy shared the Blend Shape function in Maya, which allows simple models to be rigged; the File Path Editor, which helps fix image paths in Maya with one click; and the Align Tool, which helps us align the posters. Basic but useful functions.

Conclusion:

This week we have been creating and modifying. After we finished the basic scene model, we passed it on to our VR majors. This allowed the VR partners to test the scenes. The gaming students then started testing some game settings, and they also said that they could work on the materials and lighting of the model faster with Unity. I think the team project has helped me break the shackles of my past thinking, Lucy and I have been encouraging each other, and we are always sharing ideas and creative thoughts that we might use in modeling. Next week we will start creating the poster model. Thanks to my partners, we will continue to work on the next projects!

Posted in Collaboration Unit

Previs/ Postvis/ Techvis | Film/ VFX/ Game Animation | Matchmove/ Rotomation/ Motion Capture

Part I: Previs / Postvis / Techvis

Using previs, postvis and techvis at the beginning of a shoot helps the director quickly explore various ideas and techniques for shooting the film. The camera, effects and art departments all benefit from previs's unique agility and the technical data it provides. Previs can quickly provide simple effects during the editing process, saving time on post-production changes and adjustments. Movements are modeled and animated digitally in 3D by artists so that directors can quickly view and test concepts, determine how best to tell the story, and make the most of their time.

Previs

Previsualization, or previs, is the process of visualizing a film's sequences before they are shot. Modern previs uses digital technology and enables the director to explore, devise, develop and express the vision for the film's sequences. The advantage of previsualization is that it allows the director, filmmaker or VFX supervisor to experiment with a variety of staging and art direction options, such as lighting, camera placement and movement, stage direction and editing, without having to incur the cost of actual production.

Production previs helps develop the action and tell the story, including visualizing the action, the shots, and other settings. From the previs, the production team begins to list and map out the technical knowledge and components needed for the shots and what will be involved in the live-action photography. The previs workflow also includes developing more specific character-related work within the environment, including range of movement, fighting style and props used.

Postvis

Postvis is a way of compositing elements set up in previs into a rough 3D environment or into real filmed footage before the final effects are created. Postvis allows for a quick preview of the composited visuals, which avoids the time and money spent on repeated, extensive changes during visual effects production. Postvis often uses complex 3D tracking programs to simulate the movement of the live camera. Once the perspective is matched in the scene, postvis makes it possible to preview how imported digital character simulations will affect the real scene. Digital scene extensions and placeholder effects can also be added at the postvis stage. All material added at this stage can be manipulated and adjusted quickly, saving a great deal of time and resources in the subsequent production process.

Techvis

Techvis mainly provides technical solutions for the previewed work, such as track positions, camera angles, green-screen build distances and other issues. When the plot requires filming complex movements by particular characters, the techvis staff are called upon to provide specific technical solutions. Techvis also plays a role in modeling scenes and predicting the best positions of light and shadow at the planned shooting time, and it works in coordination with the VFX and camera departments to achieve the best possible results.

Part II: Film Animation / VFX Animation / Game Animation

Film Animation

The characters and scenes in film animation mainly serve the plot. Whether it is 2D or 3D film animation, the final visual result on screen must be smooth enough; character animation that is not smooth enough creates an uncomfortable viewing experience for the audience. This is why an animator's most important and fundamental skill is to create character animation that conforms to the laws of motion. In Film Animation, the characters' movements are exaggerated while still conforming to the laws of motion. In addition to characterization, the animator needs to refine the details of the character animation. Film Animation is like live-action film in that the audience wants to enjoy an interesting and vividly acted story; unlike live-action film, Film Animation is more malleable in terms of characters and scenes.

VFX Animation

VFX Animation combines virtual elements with real filmed objects to give the effects animation on screen a realistic feel. VFX is often used to create unusual real-world events such as explosions, tsunamis, volcanic eruptions and other disasters. It is mainly used to model and composite material shot against a green screen, using realistic, detailed models to stand in for elements of the performance. To ensure the audience sees VFX Animation that matches the live-action footage, the colour of the footage is adjusted repeatedly in post-production, which allows the virtual elements added in post to blend better into the filmed scenes. Artists continue to explore VFX Animation in the film and television industry, and it helps to create the mood of a scene and draw the audience into the plot.

Game Animation

Animation is a significant part of a game, helping the game world feel real to the player. In Game Animation, animators transfer human movement and expression onto in-game characters, which serves to bring out each character's personality. In-game character models, brought to life with animation, provide a platform for emotional and narrative interaction between the player and the game character. Game animation helps link the game to the integrity of its plot while also enhancing the game's artistry and aesthetics. Some game creators also apply motion capture and facial capture to game animation to enhance the game's artistry and playability. Most character movements in Game Animation are cyclical, such as jumping, walking and running, and some movements may not conform to normal laws of movement or physics (e.g., a character can jump two storeys high). Still, these special character animation settings do not hurt the player's experience of the game; the fact that some features differ from the real world adds to the game's appeal.

Part III: Motion Capture with Matchmove and Rotomation

Matchmove:

Matchmove matches computer-generated (CG) scenes with live-action footage so that cameras, people and backgrounds can be reconstructed in a virtual scene. Matchmove artists also visit the set to measure the environment and place tracking markers. In the subsequent matchmove work, the tracking markers are used to track the movement of the camera and calculate the corresponding coordinates in the 3D scene. Matchmove can also use the tracking markers to reproduce the movement of characters, vehicles or other objects in the virtual scene, which is how person and object tracking is done. The resulting motion files (camera, object or body tracks) are then passed through the VFX pipeline to other departments so that the compositor can combine everything seamlessly. The concept of matchmove revolves around shooting a scene with two cameras at matching angles. The first is the physical camera that records an actor whose movements and expressions will later be replaced by a 3D object; this footage is known as the live-action plate. The second is a virtual 3D camera that replicates the real camera's movement and perspective so that the 3D objects fit into the filmed scene.

Rotomation

The rotomation technique traces live-action footage of actors frame by frame to replicate an actor's motion in 2D or 3D animation. Rotomation (or RotoAnim) is a combination of rotoscoping and animation: matched 3D elements are hand-tracked against the background of the real footage. Rotomation is responsible for transferring the actor's performance onto the 3D model, which then acquires and exhibits every nuance of the actor's behaviour. The final footage consists of a 3D object whose motion has been fed to it by matching the movements of the actor underneath. Rotomation lets the 3D model inherit the actor's performance, allowing VFX artists with the relevant software and hardware to produce the 3D object in a shorter time and at a lower cost.

Motion Capture

A motion capture system is a high technology device used to measure moving objects in three dimensions accurately. It is based on the principles of computer graphics. It uses several video capture devices arranged in space to record the motion of a moving object (tracker) in the form of an image.

Motion capture is a technique for recording information about human movement for analysis and playback. The data captured can be as simple as recording the spatial position of body parts or as complex as recording the face and muscle groups’ subtle movements. Motion capture as applied to computer characters, on the other hand, involves converting the movements of a real person into those of a digital actor. This conversion mapping can be direct, as when a live actor’s arm movements are used to control the arm movements of a digital actor. It can also be indirect, such as using the real actor’s arm and finger movements to control the digital actor’s skin colour and mood. In a performance motion system, the performer is responsible for making various movements and expressions according to the plot, captured and recorded by the motion capture system. The captured movements and expressions are then used to drive the character model through the animation software. The character is able to make the same movements and expressions as the performer and generate the final animation sequence.

The difference between Matchmove and Rotomation

There are two main types of Matchmove tracking, CameraTrack and ObjectTrack.

CameraTrack is needed to convert camera information from the live footage into digital information for post-production software, matching live action with CG elements in terms of camera position and movement. In film production, for example, the green screen in the studio is replaced in post with an effect or a digital scene.

ObjectTrack is used for relatively simple objects that change in distance and perspective but do not deform much, for example a photograph in a scene or the license plate of a moving vehicle. However, if you want to replace an object with more complex deformation, such as a character's face, you need to use the Rotomation technique.

Rotomation, which mainly does the digital replacement of characters, includes erasure (elements such as wire, original character) and character performance tracking. In addition to replacing actors, Rotomation is often used in digital make-up.

Posted in Advanced and Experimental 3D Computer Animation

Houdini Tutorial Week 3

This week we will be attempting to make a destruction effect, and I will be using the wooden house model I made in the first week for the destruction test.

First, we need to understand [Voronoi].

[Voronoi] is about the closest thing we have to a natural-looking destruction pattern. The way it works is that we scatter a lot of points, and between every pair of points we draw a dividing line; each line stops where it meets a line belonging to another pair of points. This is exactly the effect we want in the 3D shattering process: the irregularity of the shattering gives the effect a more realistic texture.

Definition of [Voronoi]:

In mathematics, a Voronoi diagram is a special kind of decomposition of a metric space determined by distances to a specified discrete set of objects in the space, e.g., a discrete set of points. It is named after Georgy Voronoi; the diagram is also called a Voronoi tessellation, a Voronoi decomposition, a Dirichlet tessellation (after Lejeune Dirichlet), or a Thiessen polygon.

In the simplest case, we are given a set of points S in the plane, which are the Voronoi sites. Each site s has a Voronoi cell, also called a Dirichlet cell, V(s) consisting of all points closer to s than to any other site. The segments of the Voronoi diagram are all the points in the plane that are equidistant to the two nearest sites. The Voronoi nodes are the points equidistant to three (or more) sites.
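Restating the definition above in symbols (plane case, with site set S and Euclidean distance d), the Voronoi cell of a site s is:

```latex
V(s) = \{\, x \in \mathbb{R}^2 \;:\; d(x, s) \le d(x, s') \ \text{for all } s' \in S \,\}
```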

History of [Voronoi]:

Informal use of Voronoi diagrams can be traced back to Descartes in 1644. Dirichlet used 2-dimensional and 3-dimensional Voronoi diagrams in his study of quadratic forms in 1850. British physician John Snow used a Voronoi diagram in 1854 to illustrate how the majority of people who died in the Soho cholera epidemic lived closer to the infected Broad Street pump than to any other water pump.

Voronoi diagrams are named after Russian mathematician Georgy Fedoseevich Voronoi (or Voronoy) who defined and studied the general n-dimensional case in 1908. Voronoi diagrams that are used in geophysics and meteorology to analyse spatially distributed data (such as rainfall measurements) are called Thiessen polygons after American meteorologist Alfred H. Thiessen. In condensed matter physics, such tessellations are also known as Wigner-Seitz unit cells. Voronoi tessellations of the reciprocal lattice of momenta are called Brillouin zones. For general lattices in Lie groups, the cells are simply called fundamental domains. In the case of general metric spaces, the cells are often called metric fundamental polygons.

The first way of breaking:

First, create a [sphere] in the [Geometry] context. Set the [Primitive Type] to [Polygon] mode, then set the [Frequency] value to 10.

Create [scatter]. Note that [Relax Iterations] should be unchecked in [scatter], so that in the next step the surface of our model gets an irregular [voronoifracture] pattern. Then create [voronoifracture], a new node for us, which builds the fracture pattern on the model's surface based on the points from [scatter].

Add the [explodedview] node. With its help, we can observe the irregular fragmentation of the current model.

If we want to change the number of broken pieces or how far they spread after breaking, we can do that later. The [Force Total Count] value on the [scatter] node controls how many pieces the object breaks into. I think the principle is that points are added to the original object: as the number of points on the object's surface increases, [voronoifracture] produces correspondingly more pieces.

The [Uniform Scale] value in the [Explodedview] node controls how far apart the pieces sit after the object has been broken.
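For reference, here is a rough Python (hou) sketch of this first network as I understand it. I built the network interactively, so the script is only an approximate reconstruction; the node and parameter names (for example `npts` for [Force Total Count]) are assumptions and may differ between Houdini versions.

```python
# Approximate reconstruction of the first fracture network:
# sphere -> scatter -> voronoifracture -> explodedview.
import hou

geo = hou.node('/obj').createNode('geo', 'fracture_test')

sphere = geo.createNode('sphere')
sphere.parm('type').set('poly')      # [Primitive Type] = Polygon (assumed parm name)
sphere.parm('freq').set(10)          # [Frequency] = 10 (assumed parm name)

scatter = geo.createNode('scatter')
scatter.setInput(0, sphere)
scatter.parm('npts').set(50)         # [Force Total Count], i.e. number of pieces (assumed parm name)

fracture = geo.createNode('voronoifracture')
fracture.setInput(0, sphere)         # geometry to cut
fracture.setInput(1, scatter)        # cell points

exploded = geo.createNode('explodedview')
exploded.setInput(0, fracture)
exploded.setDisplayFlag(True)

geo.layoutChildren()
```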

Creating [pointsfromvolume] lets you observe the points of the model. The [Jitter Scale] value in [pointsfromvolume] changes the distribution of these points, which in turn changes how the model breaks.

Add the [Merge] node. Adjust some more node parameters (to control the number and shape of broken models).

Create the [vdbfrompolygons] node.

This node can create a distance field (signed (SDF) or unsigned) and/or a density (fog) field.

When you create a fog field you can choose to fill the band of voxels on the surface or (if you have an airtight surface) fill the surface (see the Fill interior parameter). Since VDB primitives only store the voxels around the surface, they can have a much higher effective resolution than creating a traditional volume with IsoOffset. You can connect a VDB to the second input to automatically use that VDB’s orientation and voxel size (see the Reference VDB parameter).

When we uncheck [Distance VDB] in [vdbfrompolygons] and then select [Fog VDB], the state of the model will change.

Create the [isooffset] node. Connect the [isooffset] node directly to the [sphere] node. At this point, you can observe the change in the shape of the object.

The IsoOffset operation builds an implicit function given the input geometry. It then uses the implicit function to create a shell at a fixed offset from the original surface. The tetrahedral mesh mode may be used to create a uniformly sampled array of tetrahedrons for use in simulations. The volume output modes allow the implicit function to be output directly as a volume primitive without further processing.

At this point we delete the [isooffset] node and connect a [scatter] under the [vdbfrompolygons] node. Uncheck [Relax Iterations] in this [scatter] so that randomly distributed points appear. Then disconnect [merge] from the [pointsfromvolume] line and connect the [scatter] under [vdbfrompolygons] to [merge] instead. Adjusting the [Force Total Count] of this [scatter] adjusts the number of broken pieces.

Add [remesh] node.

[remesh]:

This node tries to maximize the smallest angle in each triangle. (A “high quality” triangle mesh is one where all angles are as close as possible to 60 degrees.)

This node does two types of remeshing:

Uniform

The node tries to equalize all edge lengths, giving triangles of equal size.

Adaptive

The node uses bigger triangles in broad areas and smaller triangles in detailed areas. This allows you to represent the original surface with fewer triangles. However, since edge lengths vary, this mode will have fewer equilateral triangles than Uniform.

Add the [noise] node to adjust the shape of the object's base model. This also gives the final result a more organic, curved look.

Create [rest] and [attribwrangle] nodes.

[rest]:

This node creates an attribute which causes material textures to stick to surfaces deformed using other operations.

Rest can get the rest position in one of two ways:

1. By reading a file.

2. By attaching a second input.

The first input is the deforming geometry. The second input (or file) is the rest position data, which is typically a static, non-deforming surface. If no second input or rest file is provided the first input will be used for the rest values. This is useful when the rest SOP is used before deformation in the pipeline.

The topologies of the two geometries should match.

All primitives support the ‘rest’ attribute, but, in the case of quadric primitives (circle, tube, sphere and metaball primitives), the rest position is only translational. This means that rest normals will not work correctly for these primitive types either.

The ‘rest’ attribute is exported to RenderMan as the ‘Pr’ attribute. This is output for all surfaces as a ‘vertex float attribute’.

Rest normals are required if feathering is used on polygons and meshes in Mantra. NURBs/Beziers will use the rest position to compute the correct resting normals.

[attribwrangle]:

Final node diagram for the first crushing method:

Check [Piece Prefix] in the [voronoifracture] node. It shows us all the polygons whose primitives share the same piece name.
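If you want to inspect those piece names, one way (an assumption on my part, not something shown in the tutorial) is to read the `name` primitive attribute in Houdini's Python shell; the node path below is just an example.

```python
# Hypothetical check: list the unique piece names written by [voronoifracture].
import hou

frac = hou.node('/obj/fracture_test/voronoifracture1')   # example path, adjust to your network
names = frac.geometry().primStringAttribValues('name')   # one value per primitive
print(sorted(set(names)))                                # expected to look like ['piece0', 'piece1', ...]
```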

The second type of fragmentation:

Add [scatter] to the original nodes (with the [Force Total Count] value turned down), then add [grid] and [copytopoints].

[grid]:

The plane can be a mesh, Bezier and NURBS surfaces, or multiple lines using open polygons.

[copytopoints]:

Add the [attribrandomize] node (remember to change [Distribution] to [Direction or Orientation]).

[attribrandomize]:

This node generates random values to create or modify an attribute.

Turn on the Visualize as Points toggle to preview the probability distribution as a generated point cloud whose positions are drawn from the distribution.

At this point, we need to increase the [size] value of [grid]. This makes the cutting planes that slice through the object larger.

Add [booleanfracture] and [explodedview]. This will form the shape of the model after a fracture.

[booleanfracture]:

This SOP fractures the input mesh using one or more cutting surfaces. Similar to Voronoi Fracture, this is a higher-level node (based on the Boolean SOP) that handles common fracturing-related tasks such as naming pieces, recomputing normals, and building constraints between adjacent pieces.

[explodedview]:

This operation pushes selected geometry out from the center. It does so piece-by-piece to create an exploded view of the geometry. This can be very useful in visualizing how fractured geometry was broken up.

When we want to change the degree of fragmentation of an object, we need to adjust the [Uniform Scale] in the [Explodedview] node.

Add the [attribnoise] node and adjust the impact values for [attribnoise] and [grid]. This will add more detail to the brokenness of the model.

Add [divide] node.
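Here is a rough hou sketch of this second method as I understand it: scatter points on the object, copy randomly oriented grids onto them as cutting planes, and fracture with them. The box is only a stand-in for my model, the wiring is my own reconstruction, and I left the [attribrandomize] settings to be made in the UI because I am not sure of its exact parameter names.

```python
# Approximate reconstruction of the boolean-fracture network:
# object + (grid copied to randomly oriented scatter points) -> booleanfracture -> explodedview.
import hou

geo = hou.node('/obj').createNode('geo', 'boolean_fracture_test')

obj = geo.createNode('box')                 # stand-in for the object to break

scatter = geo.createNode('scatter')
scatter.setInput(0, obj)

orient = geo.createNode('attribrandomize')  # set [Distribution] to Direction or Orientation in the UI
orient.setInput(0, scatter)

grid = geo.createNode('grid')               # the cutting plane

copy = geo.createNode('copytopoints')
copy.setInput(0, grid)                      # geometry to copy
copy.setInput(1, orient)                    # target points

fracture = geo.createNode('booleanfracture')
fracture.setInput(0, obj)                   # mesh to fracture
fracture.setInput(1, copy)                  # cutting surfaces

exploded = geo.createNode('explodedview')
exploded.setInput(0, fracture)
exploded.setDisplayFlag(True)

geo.layoutChildren()
```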

Final node diagram for the second crushing method:

Below we are going to create models that have the texture of broken wood.

The initial node is the same as before. At this point, we can see that the model is broken and looks like stone.

Create the [transform] node, then right mouse click on [Actions] in [transform], then select [Create Reference Copy].

In the [Transform Reference Copy], right-click the [Invert Transformation] parameter to delete its channel reference, then turn [Invert Transformation] on again.

When we modify the scale of [transform] and uniform scale of [explodedview], we can change the degree to which the wood breaks.

The third type of fracture:

Create [rbdmaterialfracture].

[rbdmaterialfracture]:

Adjusting the value of [Scatter Points] in [rbdmaterialfracture] increases the number of fragments. And we can see that the third form of fragmentation is more natural.

You can adjust [Detail] in [rbdmaterialfracture] to add fracture detail to the model. I have adjusted [Edge Detail] and [Interior Detail].

Okay, let’s get started with some simulations.

First create a [sphere] (in [Polygon] mode), a [null] and a [dopnet]. Increase the y-axis value of the [sphere] so that it sits above the 'ground'.

[dopnet]:

The DOP Network Object contains a DOP simulation. Simulations can consist of any number of simulation objects (which do not correspond to Objects that appear in /obj). These simulation objects are built and controlled by the DOP nodes contained in this node. Simulation-wide controls are provided on this node, such as caching options and time step controls. The entire simulation can also be transformed using the Transform parameters of this node. Transforming a simulation at this level does not affect the simulation at all. The Transform is applied after the simulation occurs.

Double click on the left mouse button to enter the [dopnet] node.

[rigidbodysolver]:

The RBD Solver DOP sets objects to use the Rigid Body Dynamics solver. If an object has this DOP as its Solver subdata, it will evolve itself as an RBD Object. This solver is a union of two different rigid body engines, the RBD engine and the Bullet engine. The RBD engine uses volumes and is useful for complicated, deforming, stacked, geometry. The Bullet engine offers simpler collision shapes and is suitable for fast, large-scale simulations.

[rbdsolver]:

[bulletrbdsolver]:

The Bullet Solver DOP sets objects to use the Bullet Dynamics solver. This solver can use simplified representation of the objects, such as boxes or spheres, or a composite of these simple shapes to make-up a more complex shape. This solver can use arbitrary convex shapes based on the geometry points of the object, and can also collide objects against affectors that are cloth, solid, or wire objects.

We just need to use [bulletrbdsolver].

Create [rbdobject] and point its [SOP Path] at our object.

Create [gravity] and [groundplane]. This is equivalent to giving the model gravity and a platform (the ground).

Adjusting the [Physical] properties on [rbdobject] and [groundplane] changes how the ball behaves when it hits the ground (how much it bounces when it falls). [Initial State] on [rbdobject] changes how far and in what way it tumbles after landing.
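For my own notes, here is a rough Python (hou) sketch of this simulation setup. I built the DOP network interactively, so the type names and the wiring order below are assumptions based on how the network looked, not an exact record.

```python
# Approximate reconstruction of the drop simulation:
# rbdobject -> bulletrbdsolver -> gravity -> merge <- groundplane.
import hou

dopnet = hou.node('/obj').createNode('dopnet', 'drop_sim')

rbd = dopnet.createNode('rbdobject')            # its SOP Path should point at our sphere
solver = dopnet.createNode('bulletrbdsolver')   # we only need the Bullet solver here
gravity = dopnet.createNode('gravity')
ground = dopnet.createNode('groundplane')
merge = dopnet.createNode('merge')

solver.setInput(0, rbd)      # assign the Bullet solver to the RBD object
gravity.setInput(0, solver)  # gravity acts on the object stream
merge.setInput(0, gravity)
merge.setInput(1, ground)    # static ground plane merged in
merge.setDisplayFlag(True)

dopnet.layoutChildren()
```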

Now start creating the effect of dropping the broken cubes to the ground. Start by connecting the nodes of the previously broken cubes.

Add [assemble] node.

Final:

The [dopimport] and [transformpieces] nodes can help make the model-crushing process faster.

Next, we need to use Constraints.

Making models that only break when they fall and touch the ground.

The process of making a house fracture begins below.

Create [rbdmaterialfracture] under the house node of the simple cabin.

Then create [rbdbulletsolver].

Create [sphere] and use [Alt + Left Mouse Button] to create a keyframe animation. (Ball hits cabin)
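Interactively I set the keys with [Alt + Left Mouse Button] on the translate parameters; below is roughly the scripted equivalent using hou.Keyframe. The node path, frame numbers and values are purely illustrative placeholders.

```python
# Roughly the scripted equivalent of Alt+LMB keyframing the ball's translate X
# so that it flies into the cabin. Path, frames and values are placeholders.
import hou

tx = hou.node('/obj/ball_anim/transform1').parm('tx')   # hypothetical node path

start = hou.Keyframe()
start.setFrame(1)
start.setValue(-10.0)    # ball starts away from the cabin

end = hou.Keyframe()
end.setFrame(24)
end.setValue(0.0)        # ball reaches the cabin at frame 24

tx.setKeyframe(start)
tx.setKeyframe(end)
```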

Link the ball animation to [rbdbulletsolver].

Change the properties of [rbdbulletsolver].

The effect of the ball hitting the house at this point is not good: the ball knocks the whole house away from its initial spot, which looks very unnatural.

Let’s add some nodes.

Posted in Houdini

Week 2: Designing Game Level and Art Style

During the first week of collaborative project discussions, we identified the theme of creating a VR version of the Gallery and installing an interactive poster exhibition in it. This week our collaborative group started discussing the exact design of the project. We had many group discussions and all of us provided a lot of interesting design ideas and production ideas.

Our main tasks for this week were:

1. Define the game mechanics

2. Define the style of Cubism game

3. Define the initial game level design

4. Summary of this week

Define the game mechanics:

Before we started working on the game and the models, we discussed our game mechanics ideas. It was agreed that the Cubism game would be a useful reference for us and that we could combine the models in a jigsaw pattern, similar to the Cubism game. Our project will design models and game mechanics that will allow players to play puzzles and interact with the posters installed in the Gallery once they have completed the puzzle. The idea is that the player can enjoy the game and at the same time experience and appreciate Tom Eckersley’s posters in a fun way.

The style of the Cubism game:

Determine the architectural style of the game scene:

In our group discussion we chose three sets of reference atmosphere drawings, in different styles, for the final decision.

The first picture looks a little more retro, and the overall architectural style is on the flamboyant side. Our posters are a little more modern, so the first reference picture is not particularly suitable for our project.

The second image is a little more modern and fits in well with the style of our current posters. The gallery's simplicity also helps the player view the installed posters more quickly in a brightly lit scene. We all ended up choosing this image as our initial stylistic reference, but of course we will need to discuss more details during the subsequent modeling.

The third image looks a little eerie. While the overall scene is also very modern, our theme is that we want players to enjoy exploring the interactive posters in the VR Gallery and this scene doesn’t give players a laid-back feel. So, we didn’t want to choose this scene style.

The games students wanted us to build the models in a low-poly style, and Lucy and I felt that simple models are not challenging to make in Maya. We also want to explore new techniques and knowledge as much as possible during the rest of the building process. We think we can create similar effects quickly in C4D, so Lucy and I will try to create some interesting models using C4D.

We have researched some reference images of scenes and props from Pinterest and Google. The main style is still fairly simple architectural modeling.

Our group also decided on a poster that we wanted to work on during our meeting this week. During the discussion, the group members from 3D Computer Animation, Game Design, and VR provided ideas and designs from different disciplines.

I was very interested in the poster of the woman with the orange. My idea for the game was that the oranges in the scene would randomly fall and the woman would pick them up and put them in the basket. The partners also found the idea interesting and we will continue to refine the game setting and the model setting in subsequent collaborative projects.

The games student also showed us his previous assignments and told us that they had ready-made code that we could use straight away. We will continue to explore this during the rest of the production process. We discussed game design ideas for several posters in our group discussions, and you can see the richness of our design ideas during the talk shown in the diagram below.

Lucy also came up with an interesting design idea. She wanted to animate a poster about a giant panda raising a glass of wine in its hand.

The rest of the group also provided two posters related to the wine glass element and we thought we could use these three posters for interaction.

The most important thing in a collaborative project is the communication between the group members. Fortunately, we all participated actively in the discussions and provided different ideas and perspectives. We were all very respectful of each other’s ideas and we all learned different design ideas during the discussions.

This is our initial Art Gallery layout and the poster in this sketch is what we have decided to create so far.

After a group discussion, we decided that some adjustments were needed. In the original concept, we wanted the three posters with the wine glasses to be interactive, but the posters are quite far apart in this draft. We decided to relocate the bottle poster and create a set of notice boards about Tom Eckersley in its place. We also wanted the player to see this [Tom Eckersley] introduction when they enter the game, which is the initial point of the game.

The diagram below is a picture of how we all organised our concepts for the gameplay. The game students provided a lot of design ideas and helped us sort out the simple game logic.

Initial game level design:

1. [Fish]: Players control the props in the game to cut the Fish. The Fish will cut horizontally from where the player’s props touch.

2. [Target]: the apple target is modeled as a three-dimensional hemispherical shape. The middle circle is made into a cylinder for the player to assemble as a three-dimensional puzzle.

3. [Orange Lady]: Use the basket or hands to catch the randomly appearing oranges.

4,5,6. [Silhouette] & [Wine Bottle] & [Panda]: Add different objects to the wine bottle and then take it out of the poster to pour it to the other two posters.

7,8. [Monkey] & [Helmet]: Pick up leaves from the monkey poster and decorate them onto the Helmet. The Monkey will hide in the leaves, so you cannot pick up the leaves when the Monkey appears. Leaf picking is only allowed when the Monkey is hiding.

9. [Exercise]: The player repeatedly stretches the model’s arm, causing the text spring held by the model to compress and release.

10. [Bow and Arrow]: Players can play archery games.

11. [Picking Up]: The player can control the model to bend down and pick up items on the ground.

12. [Piano]: Collect the scattered keys and install them on the cat's body; the assembled piano can then play different rhythms.

Of course, each game is a basic game idea and during the production process, we will adjust it according to the needs of the project and the actual technology.

Summary of this week:

We decided on the architectural style of the posters and scenes we wanted to create, and we did a preliminary game-level design for the interactive posters. Our team members got to know each other a lot better during this week's project discussions and became more familiar with working as a team. It was amazing how everyone came up with different ideas while conceptualizing the design. I also learned a lot about the game and VR students' design concepts, and I'm looking forward to this collaborative project. We also helped each other refine other group members' design concepts during our discussions. As we all have a lot of classes and assignments during the week, we are trying our best to coordinate our time for the collaborative project. But I'm sure we will all learn something new from it!

Posted in Collaboration Unit

Review and Reflection

Learning and Growth:

I think Term 1 helped me get into the habit of looking at References. Before I came to UAL, my search for References was not comprehensive, and their scope was not complete. However, because of the blog, I needed to find a lot of information to complete the prep work, so I started researching References in a more planned way. Whenever I was animating, I would go to the web and look for keyframe drawings similar to my designs, and I would also look for a lot of reference videos. This has helped me understand many patterns of animated movement.

I practiced making and modifying animations in [3D Animation Fundamentals]. [Ball Tailed and Ball Walker], [Human Walk], [Rigging], [Stylized Walk], [Stylized Walk Feedback & Render Farm], [Body Mechanics] and [Phonemes] have helped me learn the basics of animation systematically. My initial character animations were a little stiff in the waist and back, but this has improved a lot. A lot of practice has helped me improve my animation process.

The course [Design For Narrative, Structure, Film Language and Animation] includes: [The History of Film, Animation, VFX and Film Language], [The Relationship between Politics and Media], [Visual Culture], [Story Circle, Story Arc, Type of Characters and Archetypes] and [Characters and Story Development in Three Films]. These lessons helped me learn more about the animation field. While writing this part of the blog, I researched and organized a large number of references, and putting together that overview has improved my knowledge. During the organizing process, I gradually developed some systematic concepts about animation fundamentals in my mind.

I also learned about 3DE tracking technology this semester. I was able to use camera tracking to create effects on the footage we filmed. I've had many problems with 3DE, but fortunately, through a lot of repetition and practice, I've come to understand and memorize the 3DE workflow much better.

I also used the rendering farm on the school computer via remote control software. Although there were times when the render farm gave me one or two wrong renderings, this added to my experience of dealing with the problem of rendering. And the use of the render farm helped me save a lot of time. I usually test the rendering of one or two images on my computer and then use the render farm to render them. The most helpful thing is that I can use the render farm to do my rendering work while using my computer to do other tasks.

I think the character animations I’ve produced so far are a lot more rhythmic, and the fluidity of the character movements has improved compared to my undergraduate work.

Still need to improve:

I still have some shortcomings in the animation of the movement of characters. It is necessary to adjust further and learn the consistency and balance of the movement of the characters. My teacher’s advice is to watch more live-action videos and summarize the patterns of characters’ movements. The character animation will be more fluid and natural in this way. As I collected reference videos, I gained a deeper understanding of the patterns of characters’ movements. I’m also trying to think about ways to enrich the details of the model’s movements. My imagination is still not rich enough, and I will keep watching many materials to enrich the design of the character movement.

Research direction in the future:

As I described when I applied to UAL, my dream is to become a university teacher in the future. After Term 1, I think I have a deeper understanding of how to study animation systematically. But there is still a lot of knowledge I need to learn, and I have realized how much software and knowledge research and practice require. At the moment, I keep learning from my teachers and classmates, constantly absorbing new knowledge and production techniques. I want to learn and practice more in the animation field. In addition to the animation process itself, I believe that studying animation in conjunction with other disciplines (VR, games, special effects, film, etc.) will also help me expand my research into animation's practical applications.

Posted in Advanced and Experimental 3D Computer Animation