


11.1 Introduction to 3D Surface Mapping & Quick Start
11.2 Parametric Representation of 3D Surfaces
11.3 Specifying Parametric Equations via UV Object
11.4 Surface Visualization
11.5 Shading
11.6 Putting It All Together
11.7 Handling Rectangular Shapes
11.8 Summary of UV Properties and Methods
11.1 Introduction to 3D Surface Mapping & Quick Start
As of Version 2.8, AspJpeg is capable of displaying an image mapped to an arbitrary 3D surface
such as a sphere, cylinder, cone (or a portion thereof) or any other mathematically defined surface.
This feature is useful for creating the images of promotional items such as coffee mugs, baseball caps, pens, etc.
with custom logos or photographs wrapped around them in a realistic way.
[Pic. 1: Logo on a Baseball Cap]
[Pic. 2a: Logo on a Pen with Shading]
[Pic. 2b: Image of a Painting on a Coffee Mug with Shading]
The new functionality is implemented via a new Canvas method, DrawImageUV, which accepts a single argument: another instance of the AspJpeg object
containing the image to be mapped to a 3D surface. The surface equations as well as numerous other parameters
controlling the mapping process are specified via the properties and methods of the new UV object
accessible via the Canvas.UV property. The letter combination "UV" refers to the traditional names (U, V)
of the parameters in the three parametric equations defining an arbitrary 3D surface. The UV object is described below in great detail.
The above picture of a logo wrapping around the spherical shape of a baseball cap (Pic. 1) is created
via the following code snippet (the Pic. 2a script is provided in Section 11.5, Shading,
and the Pic. 2b script in Section 11.6, Putting It All Together):
VB Script:
' Open canvas image
Set Jpeg = Server.CreateObject("Persits.Jpeg")
Jpeg.Open Server.MapPath(".") & "/../images/cap.jpg"
' Open image to map to 3D surface
Set Image = Server.CreateObject("Persits.Jpeg")
Image.Open Server.MapPath(".") & "/../images/ps_logo.png"
pi = 3.14159265
' Specify UV parameters
With Jpeg.Canvas.UV
' Parametric equations of a spheroid
.XFunc = "cos(v) * cos(u)"
.YFunc = "cos(v) * sin(u)"
.ZFunc = "1.25 * sin(v)"
.UMin = -pi / 4
.UMax = pi / 4
.VMin = 0.4
.VMax = 0.8
.UStep = 20
.VStep = 10
' Fine-tuning the position over canvas
.ShiftX = 19
.ShiftY = 3
' Camera view angle
.CameraAngle = 34
' Do not show the portion of the surface beyond the cap's "horizon"
.HideBackFace = True
' Camera location and rotation
.CameraX = 2.4
.CameraY = 0.1
.RotateCameraAroundAxes 0, 30, 30
End With
' Draw image on the canvas
Jpeg.Canvas.DrawImageUV Image
Jpeg.Save Server.MapPath("caplogo.jpg")

C#:
<%@ Import Namespace="System.Web" %>
<%@ Import Namespace="ASPJPEGLib" %>
<%@ Page Language="C#" Debug="true" %>
<script runat="server" LANGUAGE="C#">
void Page_Load(Object Source, EventArgs E)
{
// Open canvas image
IASPJpeg objJpeg = new ASPJpeg();
objJpeg.Open( Server.MapPath(".") + "/../images/cap.jpg" );
// Open image to map to 3D surface
IASPJpeg objImage = new ASPJpeg();
objImage.Open( Server.MapPath(".") + "/../images/ps_logo.png" );
float pi = 3.14159265f;
// Specify UV parameters
IUV objUV = objJpeg.Canvas.UV;
// Parametric equations of a spheroid
objUV.XFunc = "cos(v) * cos(u)";
objUV.YFunc = "cos(v) * sin(u)";
objUV.ZFunc = "1.25 * sin(v)";
objUV.UMin = -pi / 4;
objUV.UMax = pi / 4;
objUV.VMin = 0.4f;
objUV.VMax = 0.8f;
objUV.UStep = 20;
objUV.VStep = 10;
// Fine-tuning the position over canvas
objUV.ShiftX = 19;
objUV.ShiftY = 3;
// Camera view angle
objUV.CameraAngle = 34;
// Do not show the portion of the surface beyond the cap's "horizon"
objUV.HideBackFace = true;
// Camera location and rotation
objUV.CameraX = 2.4f;
objUV.CameraY = 0.1f;
objUV.RotateCameraAroundAxes( 0, 30, 30 );
// Draw image on the canvas
objJpeg.Canvas.DrawImageUV( (ASPJpeg)objImage );
objJpeg.Save( Server.MapPath("caplogo.jpg") );
}
</script>

Click the links below to run this code sample:
http://localhost/aspjpeg/manual_11/11_caplogo.asp
http://localhost/aspjpeg/manual_11/11_caplogo.aspx
NOTE: To avoid ambiguity, everywhere in this chapter, an image such as a custom logo or photograph being mapped to a 3D surface is referred to simply as image,
while the main background image on which it is drawn is referred to as canvas.
11.2 Parametric Representation of 3D Surfaces
NOTE: If you are already familiar with parametric equations, this section can be skipped.
To stretch an image onto a 3D surface, this surface must be mathematically defined. An arbitrary surface
can be represented by three mathematical equations with two parameters: x(u, v), y(u, v) and z(u, v).
To understand the parametric definition of 3D surfaces, let us first define a 2D curve. A two-dimensional
curve can be defined by two equations with one parameter: x(u), y(u).
For example, a simple circle with the radius of 1 can be defined as follows:
x(u) = sin(u)
y(u) = cos(u)
u ∈ [0; 2π]
As the u parameter changes from 0 to 2π, the x(u) and y(u) coordinate values follow the unit circle with the center in (0, 0) and beginning
and end in (0, 1). This follows directly from the definitions of the trigonometric functions of sine and cosine.
In general, the sin/cos pair is indispensable in defining any circular curve or surface.
For rendering purposes, the circle is approximated with a polygon by having u assume
a finite set of values in the given range. In the picture below, the circle is approximated by a 16-vertex polygon.
Silver orbs represent the vertices and rods depict the edges connecting those vertices. The polygon fully resides
on the XY-plane.
[Pic. 3: Parametrically defined circle]
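For illustration, the sampling above can be reproduced in a few lines of Python (used here only to demonstrate the math; AspJpeg performs this sampling internally):

```python
import math

def circle_vertices(n=16, radius=1.0):
    # Sample x(u) = sin(u), y(u) = cos(u) at n evenly spaced values
    # of u in [0; 2*pi) to approximate the circle with an n-vertex polygon.
    return [(radius * math.sin(2 * math.pi * i / n),
             radius * math.cos(2 * math.pi * i / n))
            for i in range(n)]

verts = circle_vertices()
print(verts[0])   # (0.0, 1.0): the curve begins (and ends) at (0, 1)
```

Every vertex lies exactly on the circle; only the straight edges connecting them deviate from it, which is why larger step counts yield smoother renderings.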
A 3D surface can be thought of as a 2D curve moving through three-dimensional space and possibly changing its shape along the way.
This is where a second parameter, v, and a third equation, z(u, v), come into play.
For example, a cylinder is simply a circle that moves upwards along the z-axis.
The x and y equations are the same as for a circle, and the z equation is a simple linear function of v.
Therefore, the equations of a cylinder with a radius of 1 and height of 1 are:
x(u, v) = sin(u)
y(u, v) = cos(u)
z(u, v) = v
u ∈ [0; 2π]
v ∈ [0; 1]
[Pic. 4: Parametrically defined cylinder]
In case of a hemisphere, the circle travels upwards and collapses at the same time.
Assuming that the v parameter changes from 0 to π/2, the vertical speed (along the z-axis)
is sinusoidal, while the horizontal rate of collapse (i.e. along the x and y axes) is cosinusoidal, so the parametric equations of
a hemisphere with the radius of 1 are:
x(u, v) = cos(v) * sin(u)
y(u, v) = cos(v) * cos(u)
z(u, v) = sin(v)
u ∈ [0; 2π]
v ∈ [0; π/2]
[Pic. 5: Parametrically defined sphere]
Changing the lower limit of v from 0 to -π/2 would produce a full sphere.
The code sample in the previous section defines a segment of the sphere by limiting the ranges for u and v to
[-π/4; π/4] and [0.4; 0.8], respectively. Also, the shape of the sphere is elongated along the z-axis by using the factor 1.25 in the z-expression.
These adjustments were obtained via trial and error.
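These equations are easy to check numerically. The following Python sketch (for illustration only, independent of AspJpeg) evaluates the hemisphere equations at the two extremes of v:

```python
import math

def sphere_point(u, v):
    # The hemisphere equations from the text:
    # x = cos(v)*sin(u), y = cos(v)*cos(u), z = sin(v)
    return (math.cos(v) * math.sin(u),
            math.cos(v) * math.cos(u),
            math.sin(v))

# At v = 0 the curve is the full unit circle in the XY-plane;
# at v = pi/2 it collapses to the pole (0, 0, 1) regardless of u.
equator = sphere_point(math.pi / 2, 0)     # (1, 0, 0) up to rounding
pole = sphere_point(1.234, math.pi / 2)    # z = 1; x and y vanish
```

Multiplying the z-expression by 1.25, as in the baseball-cap sample, simply stretches every computed z coordinate by that factor, elongating the sphere into a spheroid.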
In case of a cone, the circle collapses in a linear fashion as it travels upwards. The equations of a cone with a radius of 1 and height of 1 are:
x(u, v) = (1 - v) * sin(u)
y(u, v) = (1 - v) * cos(u)
z(u, v) = v
u ∈ [0; 2π]
v ∈ [0; 1]
In case of a torus, the circle goes up, down and up again while its radius expands and collapses according to its own circular pattern.
Therefore, the equations of a torus with the radius of R and girth of r are:
x(u, v) = (R + r * sin(v)) * sin(u)
y(u, v) = (R + r * sin(v)) * cos(u)
z(u, v) = r * cos(v)
u ∈ [0; 2π]
v ∈ [0; 2π]
[Pic. 6: Parametrically defined cone and torus]
Straight surfaces can be defined by parametric equations as well.
The surface shown below is defined by the following equations:
x(u, v) = u
y(u, v) = v
z(u, v) = (abs(u) + abs(v)) / 2
u ∈ [-1; 1]
v ∈ [-1; 1]
[Pic. 7: Parametrically defined nonsmooth surface]
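The equations of this non-smooth surface are trivial to evaluate; in Python, for illustration:

```python
def ridge(u, v):
    # The surface from the text: x = u, y = v,
    # z = (abs(u) + abs(v)) / 2, with u and v in [-1; 1].
    return (u, v, (abs(u) + abs(v)) / 2)

print(ridge(0, 0))    # (0, 0, 0.0): the valley at the center
print(ridge(-1, 1))   # (-1, 1, 1.0): a corner at full height
```

The abs function makes z non-differentiable along the u = 0 and v = 0 lines, producing the sharp creases visible in the rendering.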
11.3 Specifying Parametric Equations via UV Object
To specify the 3D surface equations for the DrawImageUV method to use,
the three formulas should be specified via the UV object's XFunc, YFunc and ZFunc properties.
These properties expect string values containing the equations in a form
common to most programming languages. The expressions are not case-sensitive, and may contain numbers,
arithmetic operation symbols for addition (+), subtraction (-), multiplication (*), division (/) and modulo (%),
nested parentheses, parameter names, special names and mathematical functions.
The parameter names are:
- u - the current iteration value of the U parameter;
- v - the current iteration value of the V parameter.
The following special names are currently supported:
- pi - π (3.1415926);
- rnd - a random number between 0 and 1.
The following mathematical functions are currently supported in an expression:
- abs(x) - absolute value of x;
- atn(x) - arc tangent of x, expressed in radians;
- cos(x) - cosine of x measured in radians;
- exp(x) - base-e exponential of x;
- h(x) - Heaviside step function: returns 0 if x < 0 and 1 if x ≥ 0 (useful for piecewise functions);
- log(x) - base-e logarithm of x;
- log10(x) - base-10 logarithm of x;
- sgn(x) - returns -1 if x < 0, 1 if x > 0 and 0 if x = 0;
- sin(x) - sine of x measured in radians;
- sqr(x) - square of x;
- sqrt(x) - square root of x;
- tan(x) - tangent of x measured in radians.
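For reference, these names correspond to standard mathematical functions. The following Python mapping states their semantics (Python is used only for illustration; note in particular that sqr is the square, not the square root):

```python
import math, random

# Standard-math equivalents of the expression names accepted by
# XFunc/YFunc/ZFunc, per the list above.
FUNCS = {
    "abs":   abs,
    "atn":   math.atan,
    "cos":   math.cos,
    "exp":   math.exp,
    "h":     lambda x: 0 if x < 0 else 1,    # Heaviside step: h(0) = 1
    "log":   math.log,                       # natural (base-e) logarithm
    "log10": math.log10,
    "sgn":   lambda x: (x > 0) - (x < 0),    # -1, 0 or 1
    "sin":   math.sin,
    "sqr":   lambda x: x * x,                # square, NOT square root
    "sqrt":  math.sqrt,
    "tan":   math.tan,
}
# The two special names:
NAMES = {"pi": math.pi, "rnd": random.random}  # rnd yields a fresh value each use

print(FUNCS["sqr"](3))   # 9
print(FUNCS["h"](-0.5))  # 0
```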
The domains for the U and V parameters are specified via the UMin/UMax
and VMin/VMax properties, respectively. The numbers of iteration steps
for U and V are specified via the UStep and VStep properties, respectively.
The following conditions must be met:
UMax > UMin
VMax > VMin
UStep > 0
VStep > 0
When the DrawImageUV method is called, the three functions specified by XFunc, YFunc and ZFunc
are evaluated for every (U, V) combination within the specified domains [UMin, UMax] and [VMin, VMax]
and according to the specified numbers of iteration steps UStep and VStep. These evaluations produce
a grid of quadrilaterals approximating the 3D surface. For a better approximation, each of
the quadrilaterals is further split into two triangles in a crossed-diagonal pattern (see pictures below.)
The specified image is then stretched onto the surface according to the following rules:
 The lowerleft corner of the image is mapped to the point on the surface corresponding to the U/V values of (UMin, VMin).
 The upperright corner of the image is mapped to the point on the surface corresponding to the U/V values of (UMax, VMax).
 The horizontal scanning of the image from left to right corresponds to the change of the U parameter from UMin to UMax.
 The vertical scanning of the image from bottom to top corresponds to the change of the V parameter from VMin to VMax
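One plausible way to build such an evaluation grid is sketched below in Python (for illustration only; AspJpeg performs this internally and its exact sampling may differ in detail):

```python
def uv_grid(umin, umax, vmin, vmax, usteps, vsteps):
    # Evaluate the (u, v) sample grid. Per the mapping rules:
    # grid[0][0]  -> (umin, vmin): receives the lower-left image corner;
    # grid[-1][-1] -> (umax, vmax): receives the upper-right image corner.
    # Each quad of neighboring samples is then split into two triangles.
    return [[(umin + (umax - umin) * i / usteps,
              vmin + (vmax - vmin) * j / vsteps)
             for i in range(usteps + 1)]
            for j in range(vsteps + 1)]

grid = uv_grid(-1, 1, -1, 1, 10, 10)
print(grid[0][0], grid[-1][-1])   # (-1.0, -1.0) (1.0, 1.0)
```

Scanning a row of the grid left to right corresponds to u running from UMin to UMax, and scanning rows bottom to top corresponds to v running from VMin to VMax, exactly as the image is scanned.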
For example, consider a surface defined by the following set of equations:
...
With Jpeg.Canvas.UV
.XFunc = "sqrt(sqr(u) + sqr(v)) / 2"
.YFunc = "u"
.ZFunc = "v"
.UMin = -1
.UMax = 1
.VMin = -1
.VMax = 1
.UStep = 10
.VStep = 10
End With
The picture of an apple is stretched onto this surface as follows (the vertex corresponding to the UMin/VMin pair is colored green):
[Pic. 8: Image superimposed over triangulated surface]
The actual output from the DrawImageUV method with the property .DisplayWireframe (explained below) set to True and False, respectively, is as follows:
[Pic. 9: Actual DrawImageUV output]
The complete underlying code for the images above is as follows:
Jpeg.New 500, 500, &HE0E0E0
Image.Open "c:\path\apple.jpg"
With Jpeg.Canvas.UV
.XFunc = "sqrt( sqr(u) + sqr(v)) / 2"
.YFunc = "u"
.ZFunc = "v"
.UMin = -1
.UMax = 1
.VMin = -1
.VMax = 1
.UStep = 10
.VStep = 10
.AmbientLight = 0.2
.DiffuseLight = 0.8
.SpecularLight = 0.5
.SpecularAlpha = 400
.DisplayWireframe = True ' or False
.WireframeColor = &HFFFFFFFF
.HideBackFace = True
.Shading = True
.SetLightVector 1, 1, 1
.CameraX = 2.7
.CameraAngle = 34
.RotateCameraAroundAxes 0, 15, 25
End With
Jpeg.Canvas.DrawImageUV Image
Scaling (including negative scaling) can be applied to the image being stretched via the properties ScaleX and ScaleY.
These scale values must not be equal to 0. A number whose absolute value is greater than 1 shrinks the image along the corresponding
axis into a tiling pattern, and a number whose absolute value is between 0 and 1 stretches it. A negative number for ScaleX and/or ScaleY also
flips the image horizontally and/or vertically, respectively.
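As a rough illustration of these scaling rules, the following Python sketch maps a normalized surface coordinate to an image coordinate. This is a hypothetical formula matching the described behavior, not AspJpeg's actual arithmetic:

```python
def texture_coord(t, scale):
    # t is a normalized coordinate in [0, 1] along the surface.
    # |scale| > 1 makes the image repeat (tiling); |scale| < 1 stretches it;
    # a negative scale additionally flips the direction.
    s = t * scale
    if scale < 0:
        s = 1 + s          # run backwards from the far edge
    return s % 1.0         # wrap-around produces the tiling pattern

print(texture_coord(0.75, 2.0))    # 0.5: already into the second tile
print(texture_coord(0.25, -1.0))   # 0.75: mirrored
```

With scale 0.5, the image coordinate advances half as fast as the surface coordinate, so only part of the image covers the whole surface, i.e. the image is stretched.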
Pic. 10 (left) is obtained by adding the following lines
to the code above:
...
With Jpeg.Canvas.UV
...
.ScaleX = 0.7
.ScaleY = 2.3
...
End With
while Pic. 10 (right) uses the values
...
With Jpeg.Canvas.UV
...
.ScaleX = 2.1
.ScaleY = -1.4
...
End With
[Pic. 10: The effect of ScaleX and ScaleY properties]
11.4 Surface Visualization
11.4.1 DrawImageUV's Virtual Camera Overview
When the method
objJpeg.Canvas.DrawImageUV( objImage );
is called, the 3D surface defined via the objJpeg.Canvas.UV object's properties is evaluated,
and the image represented by objImage is stretched onto it.
The 3D surface "lives" in a virtual 3D world and is viewed through the lens of a virtual camera (see Pic. 11 below.)
The camera's area of vision has the shape of a square-based pyramid. The portion of the 3D surface within the camera's view (i.e. inside the "pyramid")
is projected onto the camera's virtual "sensor" (shown in cyan color in the picture below.)
The 3D view from the virtual sensor is then rendered onto the canvas of the image represented by the objJpeg object
(that is, the object on which the DrawImageUV method is called.)
[Pic. 11: Virtual Camera and Virtual Sensor]
During rendering onto the canvas, the square-shaped 3D view is scaled in such a way that it occupies the maximum area of the canvas.
If the canvas is in landscape orientation, the 3D view fully covers it vertically (Pic. 11a, left)
and if it is in portrait orientation, it fully covers it horizontally (Pic. 11a, right.)
[Pic. 11a: The scaling of 3D view during rendering onto landscape (left) and portrait (right) canvases]
By default, the 3D view is centered within the canvas, but for fine-tuning purposes
it can be shifted up, down and sideways via the properties ShiftX and ShiftY.
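This fitting rule amounts to a small calculation, sketched here in Python (a hypothetical helper for illustration, not part of AspJpeg):

```python
def view_placement(canvas_w, canvas_h):
    # The square 3D view is scaled to the canvas's smaller dimension:
    # full vertical coverage on a landscape canvas, full horizontal
    # coverage on a portrait one, centered in the other dimension.
    side = min(canvas_w, canvas_h)
    x0 = (canvas_w - side) / 2   # ShiftX would then offset this
    y0 = (canvas_h - side) / 2   # ShiftY would then offset this
    return side, x0, y0

print(view_placement(800, 600))   # (600, 100.0, 0.0): landscape case
print(view_placement(600, 800))   # (600, 0.0, 100.0): portrait case
```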
11.4.2 Perspective Projection
The projection method used by the virtual camera is called "perspective projection," which approximates actual visual perception
by drawing objects in the distance smaller than objects close by.
The camera's view angle (i.e. the angle between two opposite sides of the "pyramid") controls the degree of perspective
(i.e. the degree of size distortion between far and near objects.) The camera's default view angle is 30°
and can be adjusted via the CameraAngle property. The following three images demonstrate the effect
of various view angles as well as the proximity of the surface being rendered to the camera:
.CameraX = 4.6
.CameraAngle = 15


.CameraX = 2.3
.CameraAngle = 30


.CameraX = 1.35
.CameraAngle = 55


In the 2nd and 3rd images above, the curvature of the peripheral portions of the surface exposes the back-face areas.
To avoid this, the property HideBackFace should be set to True:
.CameraX = 1.35
.CameraAngle = 55
.HideBackFace = True
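The interaction between CameraAngle and camera distance can be sketched with a simple pinhole-camera model (an assumed model for illustration, not AspJpeg's exact math). It shows why, in the examples above, halving the view angle goes hand in hand with roughly doubling CameraX to keep the projected size unchanged:

```python
import math

def project(point, camera_x, camera_angle_deg, sensor_size=1.0):
    # The camera sits at (camera_x, 0, 0) looking at the origin.
    # A point's (y, z) coordinates are divided by its distance from the
    # camera and scaled by the focal length implied by the view angle.
    x, y, z = point
    depth = camera_x - x
    focal = (sensor_size / 2) / math.tan(math.radians(camera_angle_deg) / 2)
    return (focal * y / depth, focal * z / depth)

# A narrow angle from afar and a wide angle from up close yield nearly
# the same projected size, but different degrees of perspective distortion.
near = project((0, 1, 0), 2.3, 30)
far = project((0, 1, 0), 4.6, 15)
```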


11.4.3 Camera's Position & Orientation
By default, the camera is positioned on the X-axis at the coordinate (5, 0, 0) with its
lens pointing towards the coordinate origin, in the direction opposite to the X-axis. Its vertical axis
is parallel to the Z-axis of the coordinate system, and its lateral axis is parallel to the Y-axis.
The camera's location in the virtual space can be changed via the properties CameraX,
CameraY and CameraZ.
However, changing these properties does not change the camera's orientation in space.
To rotate the camera around its own center, the RotateCamera method should be used.
This method accepts three angles (in degrees): the pitch angle, the bank angle and the yaw angle,
to rotate the camera around its lateral (left-to-right), longitudinal (front-to-back)
and vertical axes, respectively:
[Pic. 12: Camera's Three Rotations: Pitch, Bank & Yaw]
The order in which these three rotations are performed is significant.
The RotateCamera method first performs the pitch rotation, then bank and then yaw, in that order.
Therefore, the code
.RotateCamera 10, 20, 0
has the same effect as
.RotateCamera 10, 0, 0
.RotateCamera 0, 20, 0
but not the same effect as
.RotateCamera 0, 20, 0
.RotateCamera 10, 0, 0
because in the latter case, the 20° bank is performed prior to the 10° pitch.
The camera can also be rotated around the global X, Y and Z axes via the method RotateCameraAroundAxes.
This method expects three angles (in degrees): the angle of rotation around the Xaxis, Yaxis and Zaxis, respectively.
The method performs the rotations first around the Xaxis, then around the Yaxis and finally around the Zaxis.
Like the RotateCamera method, the order in which these three rotations are performed is significant. Therefore the code
.RotateCameraAroundAxes 0, 20, 30
has the same effect as
.RotateCameraAroundAxes 0, 20, 0
.RotateCameraAroundAxes 0, 0, 30
but not the same effect as
.RotateCameraAroundAxes 0, 0, 30
.RotateCameraAroundAxes 0, 20, 0
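The non-commutativity of 3D rotations is easy to demonstrate numerically (Python used purely for illustration):

```python
import math

def rot_y(p, deg):
    # Rotate point p around the global Y-axis by deg degrees
    a = math.radians(deg)
    x, y, z = p
    return (x * math.cos(a) + z * math.sin(a), y, -x * math.sin(a) + z * math.cos(a))

def rot_z(p, deg):
    # Rotate point p around the global Z-axis by deg degrees
    a = math.radians(deg)
    x, y, z = p
    return (x * math.cos(a) - y * math.sin(a), x * math.sin(a) + y * math.cos(a), z)

p = (1.0, 0.0, 0.0)
yz = rot_z(rot_y(p, 20), 30)   # Y first, then Z: RotateCameraAroundAxes order
zy = rot_y(rot_z(p, 30), 20)   # Z first, then Y
print(yz == zy)                # False: the two orders give different results
```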
To reset the camera to its original position, orientation and view angle, use the method ResetCamera.
11.4.4 Displaying the Wireframe
As mentioned before, the DrawImageUV method approximates a 3D surface with a finite set of vertices
lying on the surface. Neighboring vertices are connected to each other with edges to form a grid of cells,
and each cell is further split into two triangles (triangulated) with a diagonal edge.
This set of vertices and edges forms what is known as a "wireframe model" onto which the specified image is stretched.
Normally the wireframe itself is invisible. However, during the coding and debugging phase of developing an application
using the DrawImageUV method, being able to see the wireframe can be invaluable
as it helps position, orient and configure the virtual camera and other parameters so that the rendered
3D surface blends with the content of the canvas as well as possible.
To make the wireframe visible, the property DisplayWireframe must be set to True.
The wireframe color (white by default) is controlled via the WireframeColor
property. The wireframe thickness (1 by default) is specified via the WireframeThickness
property.
By default, if DisplayWireframe is set to True, the wireframe is displayed
over the image being rendered. To display the wireframe by itself,
the property DisplayImage should be set to False.
11.5 Shading
Shading in computer graphics is the process of altering the color of a surface based on its
position and orientation relative to light sources to achieve a photorealistic effect.
An evenly lit curved image may still appear flat (Pic. 13, left), while shading accentuates its depth (Pic. 13, right.)
[Pic. 13: An evenly lit image (left) and the same image with shading applied to it (right)]
The underlying script for Pic. 13 (right) is as follows:
Image.Open "c:\path\logo.png"
Jpeg.Open "c:\path\pen.jpg"
With Jpeg.Canvas.UV
.Shading = True
.DiffuseLight = 0.6
.AmbientLight = 0.2
.SpecularAlpha = 400
.SpecularLight = 0.1
.SetLightVector 1, 0, 0
.XFunc = ".13 * sin(v)"
.YFunc = ".13 * cos(v)"
.ZFunc = "u"
.VMin = -7.48
.VMax = 2.4
.UMin = -0.8
.UMax = 0.8
.UStep = 16
.VStep = 24
.ShiftX = 8
.ShiftY = 8
.CameraX = 5.8
.CameraAngle = 13
.RotateCamera 0, 45, 0
.HideBackFace = True
End With
...
The DrawImageUV method implements Gouraud shading along with the Phong reflection model based on a single light source.
To turn on shading, the UV property Shading must be set to True. The shading effect is subject to the direction of the light vector,
ambient, diffuse and specular light components, and several other parameters (described below.)
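The per-tile shading computation can be sketched as follows (a Phong-style term using the documented default parameters; the exact formula AspJpeg uses may differ):

```python
import math

def shading_value(normal, light, view=(1.0, 0.0, 0.0),
                  ambient=0.4, diffuse=0.6, specular=0.3, alpha=100):
    # Single-light Phong-style shading term; the defaults mirror the
    # documented AmbientLight/DiffuseLight/SpecularLight/SpecularAlpha values.
    def norm(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n, l = norm(normal), norm(light)
    ndotl = max(0.0, dot(n, l))                  # diffuse term
    # reflect the light vector about the normal for the specular term
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    spec = max(0.0, dot(norm(view), r)) ** alpha if ndotl > 0 else 0.0
    # the text treats shading values as lying in [0; 1], so clamp
    return min(1.0, ambient + diffuse * ndotl + specular * spec)

bright = shading_value((1, 0, 0), (1, 0, 0))   # tile facing the light: 1.0
dark = shading_value((-1, 0, 0), (1, 0, 0))    # facing away: ambient only, 0.4
```

A higher alpha makes the specular term fall off faster as the reflection vector turns away from the viewer, which is why the highlight area shrinks.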
To make it easier for the application developer to select the right shading parameters,
the shading of the 3D surface being rendered can be displayed by itself, without the overlaying image.
To enable this debug mode, the property DisplayShading must be set to True and DisplayImage to False.
The default shading parameters (AmbientLight = 0.4, DiffuseLight = 0.6, SpecularLight = 0.3,
SpecularAlpha = 100 and the light vector parallel to the X-axis)
produce the following shading on a spherical surface:

.XFunc = "sin(v) * sin(u)"
.YFunc = "sin(v) * cos(u)"
.ZFunc = "cos(v)"
.UMin = 0
.UMax = 2 * pi
.VMin = 0
.VMax = pi
.UStep = 120
.VStep = 60
.DisplayImage = False
.DisplayShading = True
.Shading = True
.CameraX = 2.4

[Pic. 14: Default shading applied to a sphere]
Shading is computed based on the angle between the light vector and the normal vector of each tile.
The latter depends on the underlying X/Y/Z formulas and may point in the "wrong" direction.
For example, the sphere shown on Pic. 14 has its normals pointing outwards. However, if the spherical
surface were based on the formulas with the sin(u) and cos(u) flipped,
the normals would be pointing inwards and the shading would look flat (see Pic. 15). To fix this, the property FlipNormals
should be set to True.

.XFunc = "sin(v) * cos(u)"
.YFunc = "sin(v) * sin(u)"
.ZFunc = "cos(v)"
.UMin = 0
.UMax = 2 * pi
.VMin = 0
.VMax = pi
.UStep = 120
.VStep = 60
.DisplayImage = False
.DisplayShading = True
.Shading = True
.CameraX = 2.4
' Adding .FlipNormals = True here would correct the shading

[Pic. 15: Normals pointing in the wrong direction: .FlipNormals = True is required.]
Shading is based on three light components: ambient, diffuse and specular. The values for these components are specified via the properties
AmbientLight, DiffuseLight and SpecularLight, respectively. The values are arbitrary
but should normally be between 0 and 1, with the sum of AmbientLight and DiffuseLight not exceeding 1.
When SpecularLight is greater than 0, the size of the specular highlight on the surface is determined
by the SpecularAlpha property which is 100 by default but can be set to a much higher value such as 500.
The higher the alpha, the smaller the highlight area.
The light vector by default points in the same direction as the camera (parallel to the Xaxis in the opposite direction to it).
The direction of light can be changed via the SetLightVector method.

.XFunc = "sin(v) * sin(u)"
.YFunc = "sin(v) * cos(u)"
.ZFunc = "cos(v)"
.UMin = 0
.UMax = 2 * pi
.VMin = 0
.VMax = pi
.UStep = 220
.VStep = 160
.AmbientLight = 0.6
.DiffuseLight = 0.4
.SpecularLight = 0.6
.SpecularAlpha = 500
.SetLightVector 1, 1, 1
...


.XFunc = "sin(v) * sin(u)"
.YFunc = "sin(v) * cos(u)"
.ZFunc = "cos(v)"
.UMin = 0
.UMax = 2 * pi
.VMin = 0
.VMax = pi
.UStep = 220
.VStep = 160
.AmbientLight = 0.2
.DiffuseLight = 0.8
.SpecularLight = 0.2
.SpecularAlpha = 300
.SetLightVector 1, 1, 1
...

[Pic. 16: More shading examples]
When applied to an actual image, shading acts as a brightness map, or more precisely, as a brightness change
map. A shading value for each pixel is a number between 0 and 1, and is displayed as a grayscale color on Pic. 14 and Pic. 16.
By default, a shading value of 0 decreases the brightness of the corresponding pixel by 100 points, i.e. its R, G and B
values in the range [0; 255] are decreased by 100 points each. Similarly, a shading value of 1 increases
the brightness of the corresponding pixel by 100 points. The default [-100; +100] range
can be changed via the properties BrightnessMin and BrightnessMax. The change in brightness is calculated as follows:
ΔB = BrightnessMin + (BrightnessMax - BrightnessMin) * shading
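In code form, this brightness mapping looks as follows (a Python sketch of the formula above; the clamping of the resulting R, G, B values to [0; 255] is an assumption):

```python
def brightness_delta(shading, bmin=-100, bmax=100):
    # dB = BrightnessMin + (BrightnessMax - BrightnessMin) * shading,
    # with the documented defaults of -100 and +100
    return bmin + (bmax - bmin) * shading

def apply_shading(rgb, shading, bmin=-100, bmax=100):
    # Add the delta to each channel, clamped to the valid [0; 255] range
    d = brightness_delta(shading, bmin, bmax)
    return tuple(max(0, min(255, round(c + d))) for c in rgb)

print(brightness_delta(0.0))               # -100: darkest tiles
print(brightness_delta(1.0))               # 100: brightest tiles
print(apply_shading((120, 80, 200), 0.5))  # (120, 80, 200): midpoint, no change
```

Narrowing the [BrightnessMin; BrightnessMax] range, as in the Pic. 17 examples, reduces the shading contrast without changing the shading values themselves.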
Pic. 17 below demonstrates the effect of changing these properties when an image
is rendered over a sphere with the same shading settings as in Pic. 16 (bottom).

.XFunc = "sin(v) * sin(u)"
.YFunc = "sin(v) * cos(u)"
.ZFunc = "cos(v)"
.UMin = 0 + pi
.UMax = 2 * pi + pi
.VMin = 0
.VMax = pi
.UStep = 80
.VStep = 40
.FlipNormals = True
.ScaleX = -1
.RotateCameraAroundAxes 0, 20, 0
' No shading
.Shading = False


...
.Shading = True
.AmbientLight = 0.2
.DiffuseLight = 0.8
.SpecularLight = 0.2
.SpecularAlpha = 300
.SetLightVector 1, 1, 1
' Default brightness properties
'.BrightnessMin = -100
'.BrightnessMax = 100


...
.BrightnessMin = -200
.BrightnessMax = 80

[Pic. 17: The effect of BrightnessMin and BrightnessMax properties]
11.6 Putting It All Together
This section demonstrates a step-by-step process to create
the picture of a coffee mug with the image of a painting wrapped around it:
We begin by defining a 3D surface "hugging" the picture of the coffee mug tightly.
Since the mug has the shape of a cylinder, we use the equations of a cylinder
(see Section 11.2, Parametric Representation of 3D Surfaces).
We enable the wireframe mode and switch off the image of the painting temporarily.
The first iteration produces the following result:

.XFunc = "cos(u)"
.YFunc = "sin(u)"
.ZFunc = "v"
.UMin = 0
.UMax = 2 * pi
.VMin = -1
.VMax = 1
.UStep = 20
.VStep = 10
.DisplayWireframe = True
.WireframeColor = &HFFFF0000
.DisplayImage = False

The next adjustment involves moving the camera closer to the object, and also shifting the entire picture to the left
by using the ShiftX property:

...
.CameraX = 3
.ShiftX = -48

It is now clear that the degree of perspective used by the virtual camera is much greater than that
of the camera used to photograph the mug. We reduce the degree of perspective
by decreasing the virtual camera's view angle (30° by default) to about 9°. This adjustment enlarges the wireframe object,
so the virtual camera needs to be moved farther away to keep the object's size unchanged. We also need to rotate the virtual camera
around the Yaxis to approximately match the position of the real camera relative to the mug.
For a slightly better alignment, the camera is banked 0.5°.

...
.CameraAngle = 9
.CameraX = 9.5
.RotateCameraAroundAxes 0, 19, 0
.RotateCamera 0, 0.5, 0

For a tighter fit, the cylinder radius is reduced from 1 to 0.96, and the range for the V
parameter is changed from [-1; 1] to [-1.05; 1.15].
The range for the U
parameter is modified so that the cylinder does not cover the handle area of the mug.
For a better approximation, the UStep parameter is increased from 20 to 60.

...
.XFunc = "0.96 * cos(u)"
.YFunc = "0.96 * sin(u)"
.VMin = -1.05
.VMax = 1.15
.UMin = -3 * pi / 2 + 0.4
.UMax = pi / 2 - 0.4
.UStep = 60

At this point, the wireframe model looks satisfactory. It is time to switch off the wireframe mode
and bring up the image of the painting. We also need to hide the backface areas.

...
.DisplayWireframe = False
.DisplayImage = True
.HideBackFace = True

To give the image of the painting some depth, we need to turn on shading.
To view the brightness map, the image should temporarily be switched off
and the shading-display mode enabled. Also, the light should come slightly from the left
to match the shading of the mug, and therefore a call to SetLightVector is in order.

...
.Shading = True
.DisplayImage = False
.DisplayShading = True
.AmbientLight = 0.05
.DiffuseLight = 0.95
.SetLightVector 3, 1, 0

And for the final result, we adjust BrightnessMin and BrightnessMax, switch off the shading-display
mode and turn the image of the painting back on:

...
.DisplayImage = True
.DisplayShading = False
.BrightnessMax = 60
.BrightnessMin = -200

The entire script is presented below:
VB Script:
' Open canvas image
Set Jpeg = Server.CreateObject("Persits.Jpeg")
Jpeg.Open Server.MapPath(".") & "/../images/mug.jpg"
' Open image to map to 3D surface
Set Image = Server.CreateObject("Persits.Jpeg")
Image.Open Server.MapPath(".") & "/../images/vangogh.jpg"
pi = 3.14159265
' Specify UV parameters
With Jpeg.Canvas.UV
' Parametric equations of a cylinder
.XFunc = "0.96 * cos(u)"
.YFunc = "0.96 * sin(u)"
.ZFunc = "v"
.UMin = -3 * pi / 2 + 0.4
.UMax = pi / 2 - 0.4
.VMin = -1.05
.VMax = 1.15
.UStep = 60
.VStep = 10
' Fine-tuning the position over canvas
.ShiftX = -48
' Camera view angle
.CameraAngle = 9
' Do not show back-face areas
.HideBackFace = True
' Camera location and rotation
.CameraX = 9.5
.RotateCameraAroundAxes 0, 19, 0
.RotateCamera 0, 0.5, 0
' Shading
.AmbientLight = 0.05
.DiffuseLight = 0.95
.SetLightVector 3, 1, 0
.Shading = True
.BrightnessMin = -200
.BrightnessMax = 60
End With
' Draw image on the canvas
Jpeg.Canvas.DrawImageUV Image
Jpeg.Save Server.MapPath("mugpainting.jpg")

C#:
<%@ Import Namespace="System.Web" %>
<%@ Import Namespace="ASPJPEGLib" %>
<%@ Page Language="C#" Debug="true" %>
<script runat="server" LANGUAGE="C#">
void Page_Load(Object Source, EventArgs E)
{
// Open canvas image
IASPJpeg objJpeg = new ASPJpeg();
objJpeg.Open( Server.MapPath(".") + "/../images/mug.jpg" );
// Open image to map to 3D surface
IASPJpeg objImage = new ASPJpeg();
objImage.Open( Server.MapPath(".") + "/../images/vangogh.jpg" );
float pi = 3.14159265f;
// Specify UV parameters
IUV objUV = objJpeg.Canvas.UV;
// Parametric equations of a cylinder
objUV.XFunc = "0.96 * cos(u)";
objUV.YFunc = "0.96 * sin(u)";
objUV.ZFunc = "v";
objUV.UMin = -3 * pi / 2 + 0.4f;
objUV.UMax = pi / 2 - 0.4f;
objUV.VMin = -1.05f;
objUV.VMax = 1.15f;
objUV.UStep = 60;
objUV.VStep = 10;
// Fine-tuning the position over canvas
objUV.ShiftX = -48;
// Camera view angle
objUV.CameraAngle = 9f;
// Do not show back-face areas
objUV.HideBackFace = true;
// Camera location and rotation
objUV.CameraX = 9.5f;
objUV.RotateCameraAroundAxes( 0, 19, 0 );
objUV.RotateCamera( 0, 0.5f, 0 );
// Shading
objUV.AmbientLight = 0.05f;
objUV.DiffuseLight = 0.95f;
objUV.SetLightVector( 3, 1, 0 );
objUV.Shading = true;
objUV.BrightnessMin = -200;
objUV.BrightnessMax = 60;
// Draw image on the canvas
objJpeg.Canvas.DrawImageUV( (ASPJpeg)objImage );
objJpeg.Save( Server.MapPath("mugpainting.jpg") );
}
</script>

Click the links below to run this code sample:
http://localhost/aspjpeg/manual_11/11_mugpainting.asp
http://localhost/aspjpeg/manual_11/11_mugpainting.aspx
The script above is also used in Live Demo #8, UV Projection.
11.7 Handling Rectangular Shapes
An older version of AspJpeg (2.5) introduced a feature enabling an image to be stretched over an arbitrary 4-point polygon
to emulate a perspective projection effect. This feature is described in Section 6.4 of this user manual
and summarized in Pic. 18 below. An image undergoing this type of deformation may be subject to significant distortions, as seen in Pic. 18 (right.)
[Pic. 18: Quasi-perspective projection of Section 6.4: round shapes are clearly distorted.]
Being a true perspective projector, the DrawImageUV method produces a much more accurate,
distortion-free output. However, to have the projection align well with the 4-point shape on the background (such as the picture of the billboard in Pic. 18),
the position, rotation and view angle of the virtual camera need to be calculated based on the coordinates of the 4 points
of the shape and the overall background dimensions. The manual trial-and-error approach used in the previous section may not
yield satisfactory results within a reasonable timeframe.
To help deal with rectangular shapes, we have developed UVFinder, a JavaScript-based application, which takes the width and height of the background image
and the coordinates of the 4 points of the desired projection as input, and produces the camera parameters
in the form of a VB or C# script as output. The image below (Pic. 19) was generated with the help of UVFinder.
[Pic. 19: True perspective projection with DrawImageUV method: no distortions.]
Click here to launch UVFinder in
a separate window.
Once started, UVFinder runs indefinitely until stopped by the user.
In search of a good alignment between the desired 4-point outline and the actual projection,
the application sifts through millions of camera positions in a matter of minutes,
and displays the one with the lowest deviation (calculated
as the sum of distances between the corresponding points) it has found so far.
It also displays the current best projection graphically over the specified outline (Pic. 20).
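The deviation metric described above can be sketched as follows (a Python illustration; UVFinder itself is written in JavaScript and its implementation may differ):

```python
import math

def deviation(projected, target):
    # UVFinder's fit metric as described in the text: the sum of distances
    # between corresponding corner points of the actual projection and
    # the desired 4-point outline.
    return sum(math.dist(p, t) for p, t in zip(projected, target))

target = [(0, 0), (100, 0), (100, 60), (0, 60)]
projected = [(0.1, 0), (100, 0.2), (99.9, 60), (0, 59.9)]
print(deviation(projected, target))   # 0.5: already a near-perfect fit
```

A deviation in the single digits means each projected corner lands within a few pixels of its target, which is why the text recommends letting the search run until the value falls below 1.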
[Pic. 20: UVFinder Interface.]
For a perfect alignment, the process should run until the deviation is down to single digits,
or better yet, falls below 1. The following VBScript output was generated by UVFinder for the
image size and coordinate values applicable to the billboard image above.
It took several hours of processing to bring the deviation down to a mere 0.104.
.XFunc = "0"
.YFunc = "u"
.ZFunc = "v"
.UMin = -2.45592 / 2
.UMax = 2.45592 / 2
.VMin = -0.81449
.VMax = -0.81449 + 1
.UStep = 10
.VStep = 10
.CameraAngle = 29.97892
.CameraX = 2.18979
.CameraY = 0.00089
.CameraZ = 0.09539
.RotateCameraAroundAxes 10.32625, 0, 0
.RotateCameraAroundAxes 0, 32.60264, 0
.RotateCameraAroundAxes 0, 0, 55.94800
.RotateCamera 1.13119, 0, 0
.RotateCamera 0, 3.65646, 0
.RotateCamera 0, 0, 10.36281
The code above is used in Live Demo #7, Perspective Projection.
11.8 Summary of UV Properties and Methods
The following table summarizes all the properties and methods of the UV object. Required properties are shown in bold.
Property/Method | Default Value | Comments

3D Surface Definition
XFunc | - | Specify the three functions (X, Y, Z) defining the 3D surface, the ranges for the U and V parameters, and the number of iteration steps for these parameters.
YFunc | - |
ZFunc | - |
UMin | - |
UMax | - |
VMin | - |
VMax | - |
UStep | - |
VStep | - |
HideBackFace | False | If set to True, hides the tiles whose normals face away from the camera.

Camera Position and Rotation
CameraX | 5 | Specify the camera coordinates.
CameraY | 0 |
CameraZ | 0 |
ResetCamera | - | Change the camera's orientation in space.
RotateCamera | - |
RotateCameraAroundAxes | - |
CameraAngle | 30 | Specifies the camera's view angle, which affects the degree of perspective.

Shading
AmbientLight | 0.4 | Specify the three light components and the direction of light.
DiffuseLight | 0.6 |
SpecularLight | 0.3 |
SpecularAlpha | 100 |
SetLightVector | (1, 0, 0) |
BrightnessMin | -100 | Specify the mapping between the shading values and the change in brightness of the corresponding pixels of the image being rendered.
BrightnessMax | 100 |
FlipNormals | False | If set to True, flips the direction of the normal vectors for all tiles. If HideBackFace is also set to True, all tiles that were previously visible become hidden, and vice versa.

Scaling
ScaleX | 1 | Specify the scaling factors that affect the way the image is stretched onto the 3D surface.
ScaleY | 1 |

Shifting
ShiftX | 0 | Specify the horizontal and vertical shifts of the center of the 3D view relative to the center of the canvas.
ShiftY | 0 |

Debugging/Helper Properties
DisplayImage | True | Switch the image, wireframe and shading on and off for debugging purposes. The three Boolean properties cannot all be False. Also, DisplayImage and DisplayShading cannot both be True, but they can both be False if DisplayWireframe is True.
DisplayShading | False |
DisplayWireframe | False |
WireframeColor | White |
WireframeThickness | 1 |

All Rights Reserved. AspJpeg is a trademark of Persits Software, Inc. 

