
From AI-generated image to a Gravity Sketch concept model: James Robbins’ experimental workflow

James Robbins is lead surface designer at Rivian. He recently gave the Gravity Sketch community a look at a proof-of-concept process he has been developing to integrate AI into his workflow and speed up the 3D process while maintaining design intent. It serves as a starting point for combining AI and VR tools to that end, and may encourage others to explore and build on the process.

Gravity Sketch

Tool flow

Midjourney to Hugging Face to Blender to Photoshop to Gravity Sketch

1. Generate a 3D model from an AI image

James generated several images using Midjourney before selecting one and heading to ZoeDepth on Hugging Face. He uses the “Image to 3D” option to convert the 2D image into a 3D mesh along with a depth map. The conversion process may shift the colors due to different gamma settings, but these can be amended later.
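
For anyone who wants to reproduce the depth-estimation half of this step locally, here is a minimal Python sketch assuming the isl-org/ZoeDepth torch.hub entry point; the hosted Hugging Face Space’s “Image to 3D” tab additionally converts the depth map into a downloadable textured mesh. File names are placeholders.

    import torch
    import numpy as np
    from PIL import Image

    # Load ZoeDepth via torch.hub (assumes the isl-org/ZoeDepth repo layout).
    zoe = torch.hub.load("isl-org/ZoeDepth", "ZoeD_N", pretrained=True)
    zoe = zoe.to("cuda" if torch.cuda.is_available() else "cpu").eval()

    # Hypothetical Midjourney export.
    image = Image.open("midjourney_helmet.png").convert("RGB")
    depth = zoe.infer_pil(image)  # depth map as a NumPy array

    # Normalise the depth map and save it for inspection.
    d = (depth - depth.min()) / (depth.max() - depth.min())
    Image.fromarray((d * 255).astype(np.uint8)).save("helmet_depth.png")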

2. Import to Blender

He imports the .gltf / .glb file into Blender. The object initially appears as a plain shaded mesh; the vertex color attribute stored in the mesh file needs to be turned on under “Attribute” in the viewport “Shading” menu for the colors to display. Additionally, he uses the “Flat” lighting setting to eliminate any other shading in the scene.
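
The same import and viewport setup can be scripted. This is a minimal bpy sketch, assuming Blender 3.x and a placeholder file path, mirroring the manual “Attribute” / “Flat” settings described above.

    import bpy

    # Import the ZoeDepth export (placeholder path).
    bpy.ops.import_scene.gltf(filepath="/path/to/helmet.glb")

    # Mirror the manual viewport setup: solid shading, flat lighting,
    # colors driven by the mesh's vertex color attribute.
    for area in bpy.context.screen.areas:
        if area.type == 'VIEW_3D':
            for space in area.spaces:
                if space.type == 'VIEW_3D':
                    space.shading.type = 'SOLID'
                    space.shading.light = 'FLAT'
                    space.shading.color_type = 'VERTEX'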

The object appears distorted, with flat areas, because it was extracted from a single viewpoint; this is where manual sculpting work becomes necessary. He aligns the model in the space by selecting a feature on the object and using it as a guide point, and introduces a plane down the center of the helmet to assist with symmetry. While performing these adjustments, he prefers to switch to “Edit” mode and select and delete the background vertices.
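
The reference plane can also be dropped in with a couple of lines of bpy; a rough sketch, assuming the helmet sits near the world origin with the intended mirror line along the X axis:

    import bpy
    import math

    # Add a vertical guide plane rotated onto the Y-Z plane so it splits
    # the helmet down the assumed center line.
    bpy.ops.mesh.primitive_plane_add(size=2.0, location=(0.0, 0.0, 0.0),
                                     rotation=(0.0, math.radians(90), 0.0))
    bpy.context.object.name = "SymmetryGuide"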

3. Sculpting

Any of the side elements that won’t be of use to the final model are also removed. He then starts sculpting, mainly using the “Grab” tool with a larger radius to start manipulating the shape. Adjustments are made based on his observations, as certain areas need to be rounder and pulled out more.

Throughout this process, he remains mindful of the center plane and the general alignment. The goal is to eliminate any distortion and achieve a pleasing form on one side of the helmet. Once satisfied with the initial sculpt, he switches back to “Edit” mode and performs a box selection of all the vertices on the side opposite the symmetry line, and deletes them. With the unwanted vertices removed, he adds a mirror modifier in “Object” mode to generate symmetry. He then returns to sculpting. Any slight creases or strange lines are ignored as he plans to create his own topology later to overlay on the model, but it’s personal preference as to how perfected it needs to be. This serves as a starting point for manipulating and shaping the geometry to suit personal needs.
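
Deleting one half and mirroring the other is straightforward to script as well. A hedged bpy sketch, assuming the symmetry line sits on the object’s local X = 0 plane:

    import bpy
    import bmesh

    obj = bpy.context.active_object

    # Select and delete every vertex on the -X side of the symmetry plane.
    bpy.ops.object.mode_set(mode='EDIT')
    bm = bmesh.from_edit_mesh(obj.data)
    for v in bm.verts:
        v.select = v.co.x < 0.0
    bmesh.update_edit_mesh(obj.data)
    bpy.ops.mesh.delete(type='VERT')
    bpy.ops.object.mode_set(mode='OBJECT')

    # Mirror the remaining half back across the X axis.
    mirror = obj.modifiers.new(name="Mirror", type='MIRROR')
    mirror.use_axis[0] = True
    mirror.use_clip = True  # keep vertices from crossing the mirror plane while sculpting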

4. UV mapping

Because the vertex colors appear much brighter than the rendered image due to differing gamma, he UV maps the model and bakes the vertex colors into an actual image texture. This allows the model to be exported to software that may not support vertex coloring and facilitates decimation to reduce the mesh density.

Settings

  • Scene render settings > Cycles CPU device: selected
  • Bake options > Diffuse: selected
  • Direct and indirect: unchecked

He adjusts the pixel margin, which controls how far the baked color bleeds around the UV islands. In the shading workspace, he adds an image texture node with Shift+A and connects it to the diffuse BSDF node, then creates a new image texture and names it.

  • Resolution: 2k
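
As a reference, the bake setup above translates roughly into the following bpy sketch; the image and node names are assumptions, and the material layout may differ in your file.

    import bpy

    scene = bpy.context.scene
    obj = bpy.context.active_object

    # Cycles on the CPU, baking diffuse color only (no direct/indirect light).
    scene.render.engine = 'CYCLES'
    scene.cycles.device = 'CPU'
    scene.cycles.bake_type = 'DIFFUSE'
    scene.render.bake.use_pass_direct = False
    scene.render.bake.use_pass_indirect = False
    scene.render.bake.use_pass_color = True
    scene.render.bake.margin = 4  # pixel margin bled around the UV islands

    # A new 2k image plus an Image Texture node for the bake to write into.
    img = bpy.data.images.new("HelmetBake", width=2048, height=2048)
    mat = obj.active_material
    tex_node = mat.node_tree.nodes.new('ShaderNodeTexImage')
    tex_node.image = img
    mat.node_tree.nodes.active = tex_node  # the bake targets the active image node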

Bake

Ensuring the texture is present, he switches to “Edit” mode and projects the UVs from a side view using “Project From View” and “Bounds.” The UV space should display the named texture. He returns to the modeling view and with the object selected, clicks on “Bake.” Depending on your hardware, this process may take a few minutes. Once complete, he saves the baked image.
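
Scripted, the projection and bake look roughly like this. It assumes the setup from the previous sketch and must be run with a 3D viewport in a side view active, since “Project From View” depends on the current view.

    import bpy

    # Unwrap from the current (side) view, scaled to the UV bounds.
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.uv.project_from_view(orthographic=True, scale_to_bounds=True)
    bpy.ops.object.mode_set(mode='OBJECT')

    # Bake the diffuse color into the active image node, then save it.
    bpy.ops.object.bake(type='DIFFUSE')
    bpy.data.images["HelmetBake"].save_render("//helmet_bake.png")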

5. Adjust for export

In Photoshop or similar, he opens the baked image and adjusts the exposure and gamma to match the original image’s saturation and contrast, and saves the modified image.
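
The Photoshop step is a manual, by-eye adjustment, but the same idea can be roughed out with Pillow and NumPy; the gamma value below is an assumption to be tuned against the original Midjourney image.

    import numpy as np
    from PIL import Image

    # Darken the over-bright bake with a simple gamma curve (tune by eye).
    img = np.asarray(Image.open("helmet_bake.png")).astype(np.float32) / 255.0
    gamma = 2.2
    corrected = np.clip(img ** gamma, 0.0, 1.0)
    Image.fromarray((corrected * 255).astype(np.uint8)).save("helmet_bake_adjusted.png")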

Back in Blender, he deletes the color attribute on the mesh, which turns the model white, and replaces it with the baked image by connecting it to the image texture node, ensuring the “Texture” option is enabled. The model now displays the higher contrast and saturation.

Another benefit of this process is the ability to decimate the model. James adds a Decimate modifier and sets the ratio to a desired value (e.g., 0.1) to reduce the mesh density. He applies the modifiers before exporting the model as a usable UV-mapped and textured mesh for use in other software.
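
A compact bpy sketch of this clean-up and export, assuming Blender 3.2+ (which exposes Mesh.color_attributes), the default glTF attribute name “Col”, and a placeholder FBX path:

    import bpy

    obj = bpy.context.active_object

    # Remove the now-redundant vertex color attribute so the baked texture takes over.
    col = obj.data.color_attributes.get("Col")
    if col is not None:
        obj.data.color_attributes.remove(col)

    # Decimate to ~10% of the original density, apply all pending modifiers, export.
    dec = obj.modifiers.new(name="Decimate", type='DECIMATE')
    dec.ratio = 0.1
    for mod in list(obj.modifiers):
        bpy.ops.object.modifier_apply(modifier=mod.name)

    bpy.ops.export_scene.fbx(filepath="//helmet_textured.fbx", use_selection=True)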

6. Import into Gravity Sketch

He imports the exported FBX file into Gravity Sketch. You can bring the file into any software, but James prefers Gravity Sketch for examining proportions and volumes and for designing new elements at scale in VR. He also imports a scan of his head and adjusts its position and scale to fit inside the helmet.

This step helps him accurately compare the helmet’s size and proportions against real human factors. It’s a good way to get a better understanding of the design. He then starts building the geometry and sketching over the model, adding details to the rear section of the helmet, so the design moves closer to a final concept.
