WEEK 5/6 - Real Time Rendering and Facial Capture
- Genevieve Myhan
- Mar 5, 2020
- 4 min read
Updated: Mar 7, 2020
Real Time Rendering in Unreal-
First, some background research into real time rendering, focusing on Unreal. The two main schools of thought in real time rendering are ray tracing (see Unreal Engine's 'Reflections' demo, 2018: https://youtu.be/lMSuGoYcT3s ) and rasterization (see Unreal Engine's 'Siren' demo, 2018: https://youtu.be/9owTAISsvwk , a combination of real time rendering and motion capture performance).
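From what I've read, the core of ray tracing is firing a ray from the camera through each pixel and testing it against the scene geometry; reflections just fire more rays from the hit points. A toy ray-sphere intersection test in Python (purely illustrative; this is not Unreal's implementation, and the function name is my own):

```python
import math

def ray_sphere(origin, direction, centre, radius):
    """Return the nearest positive hit distance t along the ray, or None on a miss."""
    # Work with the offset from the sphere centre, so the sphere sits at the origin.
    ox, oy, oz = (origin[i] - centre[i] for i in range(3))
    dx, dy, dz = direction
    # Substitute p = o + t*d into |p - c|^2 = r^2 and solve the quadratic in t.
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None

# A ray fired from the camera straight down +Z at a unit sphere 5 units away:
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1))  # → 4.0
```

Millions of these tests have to run every frame, which is what makes ray tracing so expensive in real time.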
Something I had never come across before is rasterization. The way I now understand it is that the 3D scene geometry (triangles whose vertices have X, Y, Z positions) is projected onto the 2D screen, and for each triangle the renderer works out which pixels it covers and what colour they should be. The resulting grid of pixels is a raster image. ( https://youtu.be/9idWBQUQK-s for a more thorough explanation.) Rasterization is known to be faster than ray tracing for real time rendering.
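The pixel-coverage test at the heart of rasterization can be sketched in a few lines of Python (a toy software rasterizer for a single 2D triangle; real GPUs do this in parallel in hardware, and all the names here are my own):

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area of triangle (a, b, p); positive when p lies left of the edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of (x, y) pixels whose centres lie inside the triangle."""
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel centre
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside if all three edge functions agree in sign (either winding).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.add((x, y))
    return covered

inside = rasterize_triangle((1, 1), (9, 1), (1, 9), 10, 10)
print(f"{len(inside)} pixels covered")
```

Because each triangle maps straight onto a block of pixels with a cheap inside/outside test, this is far quicker than tracing a ray per pixel.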
Uses of real time rendering vary a lot. Video games were initially one of the only areas of media where real time rendering was seen; now it is used across many different creative industries, from film production to car design.
Scene Tests-
Knowing a little more about how real time rendering works in Unreal, I am going to test a few different ideas I have regarding Fenrir's texturing.
Firstly I've set up a really basic scene with a camera in Unreal. Unfortunately, when I rebuild lighting the mesh turns completely black. I think this is most likely because I don't have any UVs on it yet, so there is no lightmap UV channel for Unreal to bake the lighting into.
Import Dialogue:
📷
This means I am going to have to rebuild my low poly model now in order to bake UVs down onto the new mesh.
Test mesh -
I want to see my textures in Unreal, so I'm going to make a quick test mesh.
High poly in ZBrush:
📷
Low poly UV’d
📷
Fenrir Quick ReDo -
Firstly I imported the new mesh (with slightly less detail and the new, correct shape) into the Maya file and aligned it with the low poly blockout.
📷
Once it was in place I made this new high poly version live and smoothed the mesh, meaning the old low poly snapped onto the surface of the new high poly mesh.
Before:
📷
After:
📷
This method works in a way, but you can really see the huge number of errors that have occurred. I am going to have to retopologise this by hand and redo the UVs.
On my schedule this puts me back by a few days, as a redone low poly model means that the rigging I have already done will be unusable. I am going to have to work really hard to catch up this week.
I have fully redone the base mesh and added a lot more detailing in the topology, especially around the neck. I have used the full high poly mesh as my live surface so as to better capture all the forms I need.
Retopo and new UVs:
📷
Low poly (but high enough for good fidelity in the face), without the smoothing.
Topology:
📷
📷
In Unreal:
📷
Left: Baked Bump Maps
Right: Imported High Poly Model
Redoing the base mesh topology has worked great! The detailing stands out well, and in a way it even adds to the 2D feel I am going for. Next I need to fully texture the model, which I am first going to try doing in ZBrush.
Facial Capture-
For the facial capture section of my work I have been testing out Faceware.
Performance capture requires the use of a head-mounted camera to capture an actor's facial data on set. Unfortunately, at this time the university was not able to provide this piece of equipment, so we had to improvise. With the help of my classmate Ryan, who also needed facial capture data for his performance piece, we created a makeshift head mount with the aim of streaming data into Faceware using either a GoPro or an ordinary phone camera.
Firstly, Ryan found some DIY tutorials we could base the design of the headpiece on. We were limited to what was available to us on a very low budget, so in the end our materials consisted of:
- A pack of large zip ties (to hold the pieces together)
- Two small garden plant frames (to be used as arms to attach the camera to)
- A small bird feeder (to hold the camera)
- A bike helmet (as the base)
- Two small reading lights (to light the face)
- A small counterweight attached to the helmet (borrowed)
- A GoPro/phone (borrowed)
Altogether this cost around £20.
Firstly we attached the arms to the helmet to test the positioning of the camera; the useful thing about the bird feeder is that any small phone could easily fit inside it.
📷📷
📷📷
For the actual shoot we hired out a GoPro and used its app to stream the footage to a phone, so we could easily check that the data we were capturing was correct.
📷
Finally, we attached the camera in place for Connor's shoot (the first test), adding markers onto the helmet to complete the marker set needed for the VICON system to work.
📷📷
📷
On this first test we learned that:
- The Camera was not stable enough.
- The camera was not high enough to capture the face completely, and would slip out of place because the feeder was not holding it securely enough.
- The audio was too muffled and crackly, again because of the feeder.
- The actor was severely limited in mobility.
These are all things we should take into consideration for the next build. The new idea is to make the rig smaller by using a proper GoPro mount, as well as adding another set of arms to hold the camera more securely.