3.2. Rendering scenes

As in two dimensions, once the scene is populated and the objects are in place, you may want to use it for several purposes:

  • Generate images: take snapshots of the scene, producing images on screen or in a file.
  • Generate derived documents: e.g. reports with snapshots, foldouts and/or blueprints of objects in the scene.
  • Make a movie of the scene, moving some objects around, changing viewpoints, or changing the objects themselves.
  • Do calculations/simulations: use the scene to perform calculations on it, including simulations.
  • Convert to other 3D formats.

We’ll be dealing with the first option here. Later chapters are devoted to the other options.

The process of producing a 2D image out of a three-dimensional model is called rendering, and the algorithms that produce that result, renderers. There are scores of rendering algorithms, which can be grouped into two families: BREP renderers and ray-casting renderers.

Roughly speaking, BREP renderers work on the boundary representation of the objects and typically produce vectorial output, while ray-casting renderers trace rays through the scene and produce raster (pixel) images.

As a starter, let’s render a scene straight away:

scene s: .loremipsum spheres   # this .loremipsum stuff creates
                               # a random scene with spheres
render s: .filename myfile.png .width 400px

This works, and produces a .png file that shows several spheres (created by the automatic loremipsum generator).

What is going on here? Under the apparent simplicity of this code, there is a long sequence of operations being applied to the scene. In this case defaults are provided, so the user is not aware of all the steps, but those steps are there and can be tailored to specific needs.

The full pipeline to generate an image from a scene is the following:

  1. Set up the active camera for the scene. Position, target, aperture and filters can be applied.

  2. A renderer generates a raster or vectorial image of the scene from the previously defined point of view. In the vectorial case, the result is a list of 2D primitives (lines, points and areas) with some 3D info still attached to them. In the raster case, the result is a pixel image.

  3. A post-processor then goes through all the 2D primitives generated, possibly changing or deleting them.

  4. If a raster image is needed, the following steps are taken now:

    1. Rasterize the primitives using a shader, returning an array of pixel values.
    2. Pass the result through a raster post-processor (pixel shader) that can change each pixel’s value.

  5. Finally, an exporter takes the image (raster or vectorial) and converts it to the required standard file format (png, gif, svg, pdf, ps…).
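The raster branch of the pipeline (steps 4.1 and 4.2) can be sketched explicitly in the same style as the other examples in this chapter. Note that RasterRenderer, .shader, .pixelShader, PhongShader and GammaCorrect are illustrative names assumed for this sketch, not confirmed parts of the API:

scene s: .loremipsum spheres

# 1) set up a camera
Camera c: .from 3 3 3 .target 0 0 0
s.setCamera c

# 2) a raster renderer (hypothetical name)
RasterRenderer r: .style solid

# 4.1) a shader rasterizes the primitives (assumed name)
r.shader = PhongShader

# 4.2) a pixel post-processor can change each pixel (assumed name)
r.pixelShader = GammaCorrect: .gamma 2.2

# 5) render and export
image1 = r.render s
image1.save spheres.png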

Similarly, when a single object (such as a cube) is rendered directly, a suitable default scene is generated for it and then rendered as described above.

In the previous examples, most of those rendering steps have happened behind the scenes, using operators created by default. But they can be explicitly created and tuned, as the following example shows:

scene scene1: .blahblah cubes

# 1) define the camera we want
Camera c: .from 2 2 2 .target 0 0 0 .aperture 23 .up 1 0 0
scene1.setCamera c

# 2) define a renderer
VectorialRenderer r: .style wireframe

# 3) use a postprocessor to emulate hand-drawing
r.postprocessor = HandDrawn: .shakiness 40% .size 20cm

image1 = r.render scene1

# 4) now we have an image; we can save it or use it
image1.save file1.png: .format png  # this is redundant, as
                                    # the extension tells it

This example shows that, by using renderer and postprocessor objects (as well as the generated image), the steps of the pipeline can be customized. Below is an explanation of how those operations work and what can be customized in them.

Note that you can also rely on those defaults and then change specific details of them:

scene s: .blahblah
s.renderer.style solid
s.render: .filename myImage.png .width 400px

Note, too, that any simple object can be rendered directly, perhaps to check its characteristics. When the render function is called on an object, a scene is created for it that spans its entire volume, and a camera is set up accordingly (additionally, if nothing else is specified, all the render steps proceed with default values). This allows rendering as simple as this:

cube c: .side 40cm .material wood
c.render: .filename mycube.pdf .pageSize A4 .fit width

In this example, a scene is set up for c, a camera is placed so that it covers the entire cube, and then a renderer is created that produces the file.

This feature is even more useful if the .debug option of the object is set. With this flag, the object will display all its components and parameters, which is useful for debugging.
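As a sketch, reusing the cube from the example above (where exactly the .debug flag is placed is an assumption; only the flag itself is described in the text):

cube c: .side 40cm .material wood .debug
c.render: .filename mycube_debug.png .width 400px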

3.2.1. Saving, displaying and printing

Up to this point, we have created an image. Anableps provides different commands to deal with those images: save will create a file, while display will open a window to show it. print goes directly to the printer:

scene s: .blahblah .loremipsum  # .blahblah and .loremipsum
                                # each generate a random
                                # scene
print s: .width 400  .name file1.png
display s: .width 400 .renderer..alpha 0.9 ..postprocessing

3.2.1.1. Some parameters for the display function

  • gamma: applies gamma correction
  • width: sets the width of the window; cannot be specified together with height.
  • height: sets the height of the window; cannot be specified together with width.
  • controls: selects which controls appear in the window.
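For instance, a display call using these parameters might look like this (the parameter names come from the list above; the values are merely illustrative):

display s: .height 300 .gamma 2.2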

See here for the full reference of display.

3.2.1.2. Parameters for save

  • format: gives the format for the file. If no format is given, it will be inferred from the file name. If nothing can be inferred, .png is the default.
  • width: sets the width of the image; cannot be specified together with height.
  • height: sets the height of the image; cannot be specified together with width.
  • permissions: sets the file permissions of the created file.
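Combining these parameters, a save call could look like the following (the values are illustrative; image1 is the image created in the pipeline example above):

image1.save picture.jpg: .format jpg .width 800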

See here for the full reference of save.

3.2.2. Exporting and external rendering

Anableps can also convert the scene to other formats, so it can be rendered with external tools.

Although these are not really renderers, they are included here. They just take the generated code and save it to a directory (or a zip file), in order to run those external applications later on:

scene s: .loremipsum cubes
export s: .filename file1.zip .format pov

There are several supported formats:

3.2.2.1. PovRay

3.2.2.2. Step

Format version: 2014.

Example:

scene1.save file1.step: .format step

Parameters:

  • colors: boolean value
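A sketch combining the export command shown earlier with the colors parameter (the off value mirrors the .colors off usage in the converting section below; whether colors takes on/off or true/false is an assumption):

export scene1: .filename file1.step .format step .colors off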

3.2.2.3. Igs

Parameters:
  • a: ???

3.2.2.4. wrml

Parameters:
  • a: ???

3.2.2.5. renderman

3.2.3. Importing

In the same way, scenes can be imported from a variety of formats. To import a scene, use load:
s = load myscene.step

The loading can be somewhat configured depending on the format.

3.2.3.1. Common parameters

  • .onError: defines what to do in case of an error. One of ignore, fix, fail, warning. If ignore is chosen, the importer will do its best to keep importing whatever it can.
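For example, a tolerant import could look like this (the syntax follows the load example above; the exact parameter placement is an assumption):

s = load myscene.step: .onError ignore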

3.2.3.2. Step files


3.2.3.3. wrml


3.2.4. Converting

Given that there are importers and exporters, converting among 3d formats is straightforward:

bleps convert file.step file.igs

This works the same way for all formats.

convert accepts parameters both from the importer and the exporter. For example, when converting from step to wrml:

convert file1.step .to file1.wrml .colors off .onError ignore

This command combines parameters pertaining to the step importer and to the wrml exporter, as well as common ones (onError).