Porting a shader

These instructions will help you port shaders from one program to another: between ShaderFX, FX Composer, Mental Mill, RenderMonkey, and Unity. Porting shaders is a manual process, but most of the work is simple copy and paste; the rest is handling the slight semantic differences on each platform.

Shader Applications

There are several applications that facilitate shader development and prototyping:

  • ShaderFX - A 3ds Max plugin capable of exporting to FX Composer. Like Mental Mill, it is intended for artists.
  • FX Composer - NVIDIA's shader development application
  • Mental Mill - An artist-oriented shader development application, similar to ShaderFX; a free version is bundled with FX Composer.
  • RenderMonkey - ATI's shader development application
  • Unity - Our target shader platform

Getting Shader Help

Resources

  • Google search for DxMaterial_Effect_format.htm, which has information on semantics for porting between 3ds Max and FX Composer

Shader Porting Practice

I found it helpful to first port the shader to FX Composer; the result makes a good baseline to compare your later ports against. Before porting to Unity, I ported to RenderMonkey, which let me identify all the correct semantics to use. Lastly, I ported to Unity while checking the docs and chat for the Unity semantics.


RenderMonkey has a StreamMapping that controls how semantics are mapped into your vertex and fragment programs. You will likely need to double-click the stream mapping outside of the pass and provide Normal, Tangent, and Binormal entries. They should all map to index 0 and use names like rmTangent, rmBinormal, and rmNormal.
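
As a sketch, the matching vertex program input struct might look like the following; the struct name and the position/texcoord members are illustrative, only the rm* naming convention and the index-0 semantics come from the stream mapping described above:

     struct VS_INPUT
     {
         float4 position   : POSITION0;  // stream mapping index 0
         float3 rmNormal   : NORMAL0;
         float3 rmTangent  : TANGENT0;
         float3 rmBinormal : BINORMAL0;
         float2 texCoord   : TEXCOORD0;
     };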


Note that when using vertex and fragment programs, the Unity ShaderLab binding parameters are created for you, so you don't want to provide that redundant (and possibly wrong) information in your shader.


Vertex and fragment programs typically have you define input and output structures. It's recommended that you use your vertex program's output structure as the fragment program's input as well; otherwise the two stages can disagree about the data layout, with hard-to-diagnose results.
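
A minimal sketch of the shared-struct pattern; the matrix and sampler names are placeholders, since each tool binds its own:

     float4x4 matViewProjection;
     sampler2D diffuseMap;

     // One struct is both the vertex program output and the
     // fragment program input, so the two stages always agree.
     struct v2f
     {
         float4 pos : POSITION;
         float2 uv  : TEXCOORD0;
     };

     v2f vert(float4 vertex : POSITION, float2 uv : TEXCOORD0)
     {
         v2f o;
         o.pos = mul(matViewProjection, vertex);
         o.uv  = uv;
         return o;
     }

     float4 frag(v2f i) : COLOR
     {
         return tex2D(diffuseMap, i.uv);
     }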

Texture Editing

  • A lot of time is spent preparing your art for use by shaders. The common texture editing programs are Adobe Photoshop, Paint.net, and Gimp.
  • When painting alpha channels on images, it is sometimes helpful to put a white, grey, or black texture beneath the channel you are painting to make it stand out more.
  • Graphics card bit precision - depending on your graphics hardware, the 8 bits per channel of a 32-bit RGBA texture may be reduced to as few as 3 bits.

Images

  • The JPG file format can add compression artifacts to your textures and cause visual glitches in your shader; watch out.
  • PNG supports alpha and lossless compression.

Using Gimp

Palettes

  • Adobe has an extension to export your Gimp GPL palette files to Adobe color swatches.

Alpha Masks

  • Gimp is very handy for editing the alpha layer of your PNG image.
  • To add a new alpha channel: Right click a layer->Add Layer Mask->White
  • To edit an existing alpha channel: Right click a layer->Add Layer Mask->Transfer layer's alpha channel
  • In your layer dialog a second image will show the alpha mask. Click the alpha box to edit the alpha layer. You can then paint on the layer directly with the selection and paint tools.
  • In some cases you may want to turn off anti-aliasing so you can paint and fill with a specific color without any interpolation.
  • When editing the alpha mask is complete: Right click a layer->Apply Layer Mask

Using FX Composer

When using FX Composer you must first create a project.


Occasionally the FX Composer default layout gets messed up, so you'll want to restore it with View->Layouts->Reset layout.


In the Materials pane, right-click and select Add Material From File, then select your FX file (exported from ShaderFX, perhaps). Next create some geometry to test the material: Create->Teapot. Then drag your material from the Materials pane to the Render window and drop it on the teapot.

Double click a material and click the Editor tab in the top middle pane to see the code.

FX Composer .fx files keep their UI elements and shader code in the same file. They use the "uniform" keyword to link the UI variables to the shader code.
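
For example, a tweakable parameter in an .fx file pairs an annotation block, read by the FX Composer UI, with an ordinary global that the shader code reads; the variable name here is illustrative:

     float4 diffuseColor : DIFFUSE
     <
         string UIName   = "Diffuse Color";
         string UIWidget = "Color";
     > = { 1.0f, 1.0f, 1.0f, 1.0f };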

In FX Composer, vertex and fragment programs can share functions that you only have to define once, which is nice.

FX Composer can define uniform input parameters on the vertex and fragment program entry functions.
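
A sketch of that style; the uniform parameters are bound to globals in the technique's compile statement, and all names are illustrative:

     sampler2D diffuseSampler;
     float4 tintColor;

     float4 mainPS(float2 uv : TEXCOORD0,
                   uniform sampler2D diffuse,
                   uniform float4 tint) : COLOR
     {
         return tex2D(diffuse, uv) * tint;
     }

     technique Main
     {
         pass P0
         {
             PixelShader = compile ps_2_0 mainPS(diffuseSampler, tintColor);
         }
     }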

Using RenderMonkey

RenderMonkey uses a single XML file to store the project and shaders, which is swell.

If multiple passes use the same functions, you may end up with a lot of copied and pasted code, because the same function must be redefined in every vertex or fragment program that uses it.

RenderMonkey cannot define uniform input parameters on the entry functions. Instead, place your uniform parameter definitions above your entry function, as shown in the sketch below.

RenderMonkey requires that you define variables in the IDE for them to be detected as uniform parameters. This includes matrices like modelViewProjection.
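
Here is a sketch of the RenderMonkey equivalent of the FX Composer snippet above, with everything at global scope; matViewProjection is RenderMonkey's usual name for the IDE-provided view-projection matrix, and the other names are illustrative:

     // Each of these must also exist as a variable or texture node in
     // the RenderMonkey workspace, or it will not be bound.
     float4x4 matViewProjection;
     sampler2D diffuseSampler;
     float4 tintColor;

     float4 ps_main(float2 uv : TEXCOORD0) : COLOR
     {
         return tex2D(diffuseSampler, uv) * tintColor;
     }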

Functions have to be defined before they are used. You may already have the function defined, but need to move it above its first call.
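
For example (illustrative names), the helper must appear above the entry function that calls it:

     // Defined first, so ps_main can see it.
     float3 halfVector(float3 lightDir, float3 viewDir)
     {
         return normalize(lightDir + viewDir);
     }

     float4 ps_main(float3 lightDir : TEXCOORD0,
                    float3 viewDir  : TEXCOORD1) : COLOR
     {
         return float4(halfVector(lightDir, viewDir), 1.0);
     }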

RenderMonkey's disassembly can be compared against the disassembly generated in Unity's player log and the disassembly shown in the Inspector when your shader is selected.

Texture filtering can add seams or unexpected side effects to your shader. Double click the texture item inside your pass to open the texture state editor. Or remove the following items from the rfx file in a text editor:

     <RmState NAME="D3DSAMP_MINFILTER" API="D3D" STATE="6" VALUE="2" USAGE="SAMPLER_STATE"/>
     <RmState NAME="D3DSAMP_MAGFILTER" API="D3D" STATE="5" VALUE="2" USAGE="SAMPLER_STATE"/>

Using Mental Mill

There is a very interesting forum thread about Mental Mill, and we are all hoping for a Mental Mill exporter, which would remove this limitation for artists!

Using Unity

Using Unity is very similar to using RenderMonkey in that functions need to be redefined in each pass.
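
A sketch of a two-pass Unity shader illustrating the duplication; the shader name, helper function, and colors are made up:

     Shader "Ported/TwoPassExample"
     {
         SubShader
         {
             Pass
             {
                 CGPROGRAM
                 #pragma vertex vert
                 #pragma fragment frag
                 #include "UnityCG.cginc"

                 // Helper visible to this pass only.
                 float4 darken(float4 c) { return c * 0.5; }

                 struct v2f { float4 pos : POSITION; };

                 v2f vert(appdata_base v)
                 {
                     v2f o;
                     o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                     return o;
                 }

                 float4 frag(v2f i) : COLOR
                 {
                     return darken(float4(1.0, 0.0, 0.0, 1.0));
                 }
                 ENDCG
             }

             Pass
             {
                 CGPROGRAM
                 #pragma vertex vert
                 #pragma fragment frag
                 #include "UnityCG.cginc"

                 // The same helper has to be defined again in this pass.
                 float4 darken(float4 c) { return c * 0.5; }

                 struct v2f { float4 pos : POSITION; };

                 v2f vert(appdata_base v)
                 {
                     v2f o;
                     o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                     return o;
                 }

                 float4 frag(v2f i) : COLOR
                 {
                     return darken(float4(0.0, 1.0, 0.0, 1.0));
                 }
                 ENDCG
             }
         }
     }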

Unity 2.1 supports Shader Model 3 but applies both DirectX and OpenGL constraints; the next version is planned to remove this limitation.


Including UnityCG.cginc provides some useful variables and helpers, like the camera eye position in object space. You'll notice a lot of shaders calculate this value by multiplying matrices that don't currently seem to be available in Unity; those operations can be optimized out.
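
For example, assuming your UnityCG.cginc provides ObjSpaceViewDir, a vertex program can get the object-space view direction without any manual matrix math; this sketch shows the vertex-program portion only:

     #include "UnityCG.cginc"

     struct v2f
     {
         float4 pos     : POSITION;
         float3 viewDir : TEXCOORD0;
     };

     v2f vert(appdata_base v)
     {
         v2f o;
         o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
         // Object-space direction from the vertex toward the camera.
         o.viewDir = ObjSpaceViewDir(v.vertex);
         return o;
     }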

Consult the manual for more built-in variable documentation.

For uniform properties to be detected, they must first be defined in the Properties section. Creating a new shader prepopulates some properties that show how this is done.
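
A minimal sketch of the pairing; the property names are illustrative:

     Shader "Ported/PropertiesExample"
     {
         Properties
         {
             _Color   ("Main Color", Color) = (1,1,1,1)
             _MainTex ("Base (RGB)", 2D)    = "white" {}
         }
         SubShader
         {
             Pass
             {
                 CGPROGRAM
                 #pragma vertex vert
                 #pragma fragment frag
                 #include "UnityCG.cginc"

                 // Detected as uniforms because matching entries exist
                 // in the Properties block above.
                 float4 _Color;
                 sampler2D _MainTex;

                 struct v2f
                 {
                     float4 pos : POSITION;
                     float2 uv  : TEXCOORD0;
                 };

                 v2f vert(appdata_base v)
                 {
                     v2f o;
                     o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                     o.uv  = v.texcoord.xy;
                     return o;
                 }

                 float4 frag(v2f i) : COLOR
                 {
                     return tex2D(_MainTex, i.uv) * _Color;
                 }
                 ENDCG
             }
         }
     }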

Porting Related Useful Forum Posts

  • Successfully ported a relief shader from ShaderFX to Unity. post
  • I read the docs. I want more docs. post
