1. To scan-convert a line:

    Given the first point and the second point, compute the number of steps, draw the first pixel, then each next pixel.

  2. DDA algorithm: (works for all slopes)

    dx, dy = second point - first point
    steps = max(abs(dx), abs(dy))
    x_increment = dx/steps, y_increment = dy/steps
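The steps above can be sketched in Python (a minimal sketch; the function name and the pixel-rounding choice, valid for non-negative coordinates, are my own):

```python
def dda_line(x1, y1, x2, y2):
    """Rasterize a line with DDA: step along the longer axis, accumulating
    fractional increments on both axes and rounding to the nearest pixel."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return [(x1, y1)]                  # degenerate: both endpoints coincide
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x1), float(y1)
    pixels = [(x1, y1)]
    for _ in range(steps):
        x += x_inc
        y += y_inc
        pixels.append((int(x + 0.5), int(y + 0.5)))  # round to nearest pixel
    return pixels
```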

  3. Bresenham algorithm: (|m| < 1, left point first)

    dx, dy = second point - first point; p = 2dy - dx
    steps = dx
    x_increment = 1
    y_increment = { if p < 0: 0, p += 2dy
                    else:     1, p += 2dy - 2dx }
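The same decision-variable loop, runnable (a sketch; the function name is mine, and it assumes integer endpoints with 0 <= slope < 1, left point first):

```python
def bresenham_line(x1, y1, x2, y2):
    """Bresenham's line algorithm using only integer arithmetic."""
    dx, dy = x2 - x1, y2 - y1
    p = 2 * dy - dx                # initial decision variable
    x, y = x1, y1
    pixels = [(x, y)]
    for _ in range(dx):            # steps = dx
        x += 1
        if p < 0:
            p += 2 * dy            # keep the same y
        else:
            y += 1
            p += 2 * dy - 2 * dx   # step up one row
        pixels.append((x, y))
    return pixels
```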

  4. Midpoint circle: (start at (0, r), step through one octant, mirror into the other seven)

    p = 5/4 - r, or 1 - r if r is an integer
    while x <= y:
        x_increment = 1
        y_increment = { if p < 0:  0, p += 2x + 1           (x taken after the increment)
                        else:     -1, p += 2x + 1 - 2y      (x and y taken after the increments) }
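A runnable sketch for integer radius (the mirroring into all eight octants is implied by the notes; the function name and set representation are mine):

```python
def midpoint_circle(r):
    """Midpoint circle: rasterize one octant of a circle of integer radius r
    centred at the origin, mirroring each point into the other seven octants."""
    points = set()
    x, y = 0, r
    p = 1 - r                      # integer form of 5/4 - r
    while x <= y:
        # mirror (x, y) into all eight octants
        for px, py in ((x, y), (y, x)):
            points |= {(px, py), (-px, py), (px, -py), (-px, -py)}
        x += 1
        if p < 0:
            p += 2 * x + 1         # x already incremented
        else:
            y -= 1
            p += 2 * x + 1 - 2 * y # x and y already updated
    return points
```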

  5. Ellipse: (first quadrant, starting at (0, ry))

    Region 1: p1 = f_ellipse(x+1, y-1/2)
    while ry^2·x < rx^2·y:
        x_increment = 1
        y_increment = { if p1 < 0:  0, p1 += ry^2(2x + 1)               (x taken after the increment)
                        else:      -1, p1 += ry^2(2x + 1) - 2rx^2·y     (x and y taken after the increments) }

    Region 2: p2 = f_ellipse(x+1/2, y-1), starting from the last Region 1 point
    while y > 0:
        y_increment = -1
        x_increment = { if p2 > 0: 0, p2 += rx^2(1 - 2y)                (y taken after the decrement)
                        else:      1, p2 += rx^2(1 - 2y) + 2ry^2·x      (x and y taken after the increments) }
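Both regions, runnable (a sketch; the function name, the quadrant mirroring, and the set representation are my own choices):

```python
def midpoint_ellipse(rx, ry):
    """Midpoint ellipse: rasterize the first quadrant of an origin-centred
    ellipse with semi-axes rx, ry, mirroring into the other quadrants."""
    points = set()
    x, y = 0, ry
    rx2, ry2 = rx * rx, ry * ry

    def plot(x, y):
        points.update({(x, y), (-x, y), (x, -y), (-x, -y)})

    # Region 1: |slope| < 1
    p1 = ry2 - rx2 * ry + 0.25 * rx2          # f_ellipse(1, ry - 1/2)
    while ry2 * x < rx2 * y:
        plot(x, y)
        x += 1
        if p1 < 0:
            p1 += ry2 * (2 * x + 1)
        else:
            y -= 1
            p1 += ry2 * (2 * x + 1) - 2 * rx2 * y

    # Region 2: |slope| >= 1
    p2 = ry2 * (x + 0.5) ** 2 + rx2 * (y - 1) ** 2 - rx2 * ry2
    while y >= 0:
        plot(x, y)
        y -= 1
        if p2 > 0:
            p2 += rx2 * (1 - 2 * y)
        else:
            x += 1
            p2 += rx2 * (1 - 2 * y) + 2 * ry2 * x
    return points
```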

  6. 2D Pipeline: world coordinates -> viewing coordinates -> normalized coordinates -> viewport (device) coordinates

    From WC to VC: (change of coordinates) Translate so the viewing-coordinate origin coincides with the world-coordinate origin, then rotate so the axes align.

    From VC to viewport: (choose the size of the window in world coordinates and of the viewport in normalized coordinates)
        xv = xvmin + (xw - xwmin)·sx,  where sx = (xvmax - xvmin)/(xwmax - xwmin)
        yv = yvmin + (yw - ywmin)·sy,  where sy = (yvmax - yvmin)/(ywmax - ywmin)
    Equivalently: first scale about the fixed point (xwmin, ywmin), then translate.
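The window-to-viewport mapping above, as a small function (a sketch; the function name and the tuple argument layout are mine):

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map a world-coordinate point into the viewport.
    window = (xwmin, ywmin, xwmax, ywmax), viewport = (xvmin, yvmin, xvmax, yvmax)."""
    xwmin, ywmin, xwmax, ywmax = window
    xvmin, yvmin, xvmax, yvmax = viewport
    sx = (xvmax - xvmin) / (xwmax - xwmin)
    sy = (yvmax - yvmin) / (ywmax - ywmin)
    return xvmin + (xw - xwmin) * sx, yvmin + (yw - ywmin) * sy
```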

  7. Clipping: point clipping; Cohen-Sutherland line clipping:

    > The screen is divided into 9 regions according to the values of xvmin, xvmax, yvmin, yvmax, each region identified by a four-bit pattern.
    > Assign region codes to the endpoints of each line.
    > Perform the OR operation on the endpoint codes of each line:
        If OR = 0000: the line is entirely inside the window.
        Else, if AND != 0000: the line is entirely outside the window.
        Else: the line is considered for clipping, as it may be partially inside.
    > Find the slope (m):
        > If the line passes through or intersects the top boundary: x = x + (yvmax - y)/m, y = yvmax

     	> ...
     	> Replace the endpoint with the intersection point and discard the portion of the line outside the boundary
     	> Repeat until the line is trivially accepted or rejected
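The full accept/reject/clip loop, runnable (a sketch; the function and constant names and the bit assignment are my own; the window is given as xmin/ymin/xmax/ymax):

```python
# Region-code bits: bit 0 = left, bit 1 = right, bit 2 = bottom, bit 3 = top.
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def region_code(x, y, xmin, ymin, xmax, ymax):
    code = 0
    if x < xmin: code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def cohen_sutherland(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Clip a segment against the window; return the clipped segment or None."""
    c1 = region_code(x1, y1, xmin, ymin, xmax, ymax)
    c2 = region_code(x2, y2, xmin, ymin, xmax, ymax)
    while True:
        if c1 | c2 == 0:               # trivially accepted
            return (x1, y1, x2, y2)
        if c1 & c2 != 0:               # trivially rejected
            return None
        c = c1 if c1 else c2           # pick an endpoint that is outside
        if c & TOP:
            x, y = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1), ymax
        elif c & BOTTOM:
            x, y = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1), ymin
        elif c & RIGHT:
            x, y = xmax, y1 + (y2 - y1) * (xmax - x1) / (x2 - x1)
        else:  # LEFT
            x, y = xmin, y1 + (y2 - y1) * (xmin - x1) / (x2 - x1)
        if c == c1:                    # replace the clipped endpoint
            x1, y1 = x, y
            c1 = region_code(x1, y1, xmin, ymin, xmax, ymax)
        else:
            x2, y2 = x, y
            c2 = region_code(x2, y2, xmin, ymin, xmax, ymax)
```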
     	
    

    Liang-Barsky line clipping:
    > Uses the parametric form of the line: x = x1 + t·Dx, y = y1 + t·Dy, where t goes from 0 to 1.
    > The point-clipping conditions in parametric form: xwmin <= x1 + t·Dx <= xwmax and ywmin <= y1 + t·Dy <= ywmax.
    > Each condition can be expressed as t·pk <= qk, with:
        p1 = -Dx, q1 = x1 - xwmin (left boundary)
        p2 = Dx,  q2 = xwmax - x1 (right boundary)
        p3 = -Dy, q3 = y1 - ywmin (bottom boundary)
        p4 = Dy,  q4 = ywmax - y1 (top boundary)
      (boundary here means the infinite extension of the window edge)
    > If pk = 0 for some k, the line is parallel to that boundary: if qk < 0 it is completely outside that boundary; if qk >= 0 it is inside it.
    > pk < 0: the line proceeds from outside to inside of that boundary.
    > pk > 0: the line proceeds from inside to outside of that boundary.
    > For non-zero pk, the value of t where the line intersects boundary k is rk = qk/pk.
    > t1 = max(0, the rk for each k with pk < 0)
    > t2 = min(1, the rk for each k with pk > 0)
    > If t1 > t2, the line is completely outside; otherwise the endpoints of the clipped line are calculated from the two parameter values t1 and t2.
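The parametric clipping steps above, runnable (a sketch; the function name is mine):

```python
def liang_barsky(x1, y1, x2, y2, xwmin, ywmin, xwmax, ywmax):
    """Clip a segment with Liang-Barsky; return the clipped segment or None."""
    dx, dy = x2 - x1, y2 - y1
    p = [-dx, dx, -dy, dy]                               # left, right, bottom, top
    q = [x1 - xwmin, xwmax - x1, y1 - ywmin, ywmax - y1]
    t1, t2 = 0.0, 1.0
    for pk, qk in zip(p, q):
        if pk == 0:
            if qk < 0:
                return None          # parallel to and outside this boundary
        else:
            r = qk / pk
            if pk < 0:
                t1 = max(t1, r)      # entering the boundary
            else:
                t2 = min(t2, r)      # leaving the boundary
    if t1 > t2:
        return None                  # completely outside
    return (x1 + t1 * dx, y1 + t1 * dy, x1 + t2 * dx, y1 + t2 * dy)
```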

  8. 3D Pipeline

    Translation, scaling, rotation about an arbitrary axis:
        i) Translate the object so that the rotation axis passes through the coordinate origin
        ii) Rotate the object so the axis aligns with one of the coordinate axes
        iii) Rotate the object by the required angle about that coordinate axis
        iv) Apply the inverse rotation of ii)
        v) Apply the inverse translation of i)
    As one composite matrix: R(axis)[theta] = T^-1 · Rx^-1(a) · Ry^-1(b) · Rz(theta) · Ry(b) · Rx(a) · T

     	cos(a) = (unit projection of the axis u onto the yz-plane) · z-axis
     	cos(b) = (unit vector of the rotated u) · z-axis
     	
     	Or, more simply, Rodrigues' formula for rotation about an axis u through the origin:
     	R(axis through O)[theta] = (cos theta)·I + (sin theta)·[u]x + (1 - cos theta)·(u ⊗ u)
     	where [u]x is the cross-product (skew-symmetric) matrix of u and u ⊗ u is the outer product
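The rotation-about-an-axis formula above (Rodrigues' formula) expanded into a 3x3 matrix (a sketch; the function name is mine, and u must be a unit vector):

```python
import math

def rodrigues_rotation(u, theta):
    """3x3 rotation matrix about the unit axis u through the origin:
    R = cos(t)*I + sin(t)*[u]x + (1 - cos(t))*(u outer u)."""
    ux, uy, uz = u
    c, s = math.cos(theta), math.sin(theta)
    return [
        [c + (1 - c) * ux * ux,      (1 - c) * ux * uy - s * uz, (1 - c) * ux * uz + s * uy],
        [(1 - c) * uy * ux + s * uz, c + (1 - c) * uy * uy,      (1 - c) * uy * uz - s * ux],
        [(1 - c) * uz * ux - s * uy, (1 - c) * uz * uy + s * ux, c + (1 - c) * uz * uz],
    ]
```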
    

    WC -> VC:
        n = N/|N| = (n1, n2, n3)
        u = unit vector of V × N = (u1, u2, u3)
        v = n × u
    VC -> PC: (projection toward the reference point (xrp, yrp, zrp))
        x' = x + u(xrp - x), y' = y + u(yrp - y), z' = z + u(zrp - z)
        On the third column of the projection matrix: -xrp/zrp, -yrp/zrp, 0, -1/zrp
        If the window is off-centre, shear by a = -(xrp - ())/zrp, b = -(yrp - ())/zrp, then translate by -a·xrp, -b·yrp

MODELLING: >representation schemes for solid objects are often divided into two broad categories. Boundary representation describes a solid as a set of surfaces that separate the object interior from the environment, e.g. polygon facets. Space-partitioning representation describes interior properties by partitioning the spatial region containing the object into a set of small, non-overlapping solids, e.g. an octree. >The most used boundary representation for a 3D object is a set of surface polygons that enclose the interior; processing speeds up because the surfaces are planar (described by linear equations).

>specify a polygon surface with a set of vertex coordinates and associated attribute parameters; as information for each polygon is input, the data are placed into tables that are used in subsequent processing and display of objects in a scene.
>can be organized into two groups: geometric data tables contain vertex coordinates and parameters to identify the spatial orientation of the polygon surfaces; the attribute table for an object includes parameters specifying the surface reflectivity, texture characteristics, and degree of transparency of the object
>convenient to organize the geometric data into three lists:
	i) a vertex table: coordinate values for each vertex in the object are stored in the vertex table
	ii) an edge table: contains pointers back into the vertex table to identify the vertices for each polygon edge
	iii) a polygon table: contains pointers back into the edge table to identify the edges for each polygon
>(example: one triangle and one quadrilateral joined along a shared edge)
>Alternatively, store just the vertex and polygon tables, but then some edges could get drawn twice
>Or use only the polygon table, but this duplicates coordinate information
>additionally, the slope of each edge can be stored
>the tables can be checked for consistency: every vertex is listed as an endpoint of at least two edges, every edge is part of at least one polygon, every polygon is closed, and each polygon has at least one shared edge
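The three linked tables might look like this in Python (a hypothetical example of one triangle and one quadrilateral sharing edge E3; all names and coordinates are illustrative):

```python
# Geometric data tables for one triangle (P1) and one quadrilateral (P2)
# joined along the shared edge E3.
vertex_table = {
    "V1": (0.0, 0.0, 0.0), "V2": (1.0, 0.0, 0.0), "V3": (1.0, 1.0, 0.0),
    "V4": (0.5, 2.0, 0.0), "V5": (-0.5, 1.5, 0.0),
}
edge_table = {  # each edge points back into the vertex table
    "E1": ("V1", "V2"), "E2": ("V2", "V3"), "E3": ("V3", "V1"),
    "E4": ("V3", "V4"), "E5": ("V4", "V5"), "E6": ("V5", "V1"),
}
polygon_table = {  # each polygon points back into the edge table
    "P1": ("E1", "E2", "E3"),
    "P2": ("E3", "E4", "E5", "E6"),
}

def is_consistent():
    """Consistency checks from the notes: every edge belongs to at least one
    polygon, and every vertex is an endpoint of at least two edges."""
    used_edges = {e for edges in polygon_table.values() for e in edges}
    if used_edges != set(edge_table):
        return False
    degree = {v: 0 for v in vertex_table}
    for a, b in edge_table.values():
        degree[a] += 1
        degree[b] += 1
    return all(d >= 2 for d in degree.values())
```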


>To get the plane equation we need three non-collinear vertices:
	Ax + By + Cz + D = 0, or (A/D)x + (B/D)y + (C/D)z = -1,
	whose coefficients are given by Cramer's rule
>if polygon vertices are specified in a counterclockwise direction when viewing the outer side of the plane (not from the view camera, but facing the surface) in a right-handed coordinate system, the direction of the normal vector will be from inside to outside
>so to get a normal vector that points outward we have
	N = (V2 - V1) × (V3 - V1)

8) Back-face detection: if V·N > 0: the polygon is a back face (hide it), else show it; usually V = (0, 0, -1)
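Combining the outward-normal formula and the back-face test (a sketch; the function names are mine):

```python
def surface_normal(v1, v2, v3):
    """Outward normal N = (V2 - V1) x (V3 - V1) for counterclockwise vertices."""
    ax, ay, az = (v2[i] - v1[i] for i in range(3))
    bx, by, bz = (v3[i] - v1[i] for i in range(3))
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def is_back_face(normal, view=(0.0, 0.0, -1.0)):
    """Back-face test: the polygon is hidden when V . N > 0."""
    return sum(v * n for v, n in zip(view, normal)) > 0
```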

  1. Z-buffer:

    compares surface depths at each pixel position on the projection plane, where the surface depths are usually measured from the view plane along the z axis of the viewing system. Each surface of a scene is processed separately, one point at a time across the surface. Usually applied to scenes containing only polygon surfaces, because depth values can be computed very quickly and the method is easy to implement. For each pixel position (x, y) on the view plane, object depths are compared by comparing z-values. Two buffers are required: a depth buffer to store the depth value for each (x, y) position, and a refresh buffer to store the intensity value for each position.

    i) Initialize the depth buffer and refresh buffer so that for all buffer positions (x, y): depth(x, y) = 0, refresh(x, y) = Iback
    ii) For each position on each polygon surface, compare its depth to the value previously stored in the depth buffer to determine visibility:
        > calculate the depth z for each (x, y) position on the polygon: z = (-Ax - By - D)/C
        > once the depth at (x, y) is calculated, the depth z' of the next position (x+1, y) is z' = z - A/C (A/C is constant for the polygon surface); moving down one scan line, z' = z + B/C
        > if z > depth(x, y), set depth(x, y) = z and refresh(x, y) = Isurf(x, y) (the comparison is reversed when viewing along the +z axis)
    where Iback is the background intensity and Isurf is the projected intensity value for the surface at that pixel position. After all surfaces are processed, the depth buffer contains depth values for the visible surfaces and the refresh buffer contains the corresponding intensity values.
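The two-buffer loop above can be sketched as follows (a sketch under my own interface assumptions: each surface is a pair of callables giving depth and intensity per pixel, returning None where the surface does not cover the pixel, with larger depth meaning closer to the viewer):

```python
def zbuffer_render(surfaces, width, height, background=0.0):
    """Minimal z-buffer: keep, per pixel, the intensity of the closest surface.
    surfaces is a list of (depth_fn, intensity_fn) pairs."""
    depth = [[0.0] * width for _ in range(height)]          # depth(x, y) = 0
    refresh = [[background] * width for _ in range(height)]  # refresh(x, y) = Iback
    for depth_fn, intensity_fn in surfaces:
        for y in range(height):
            for x in range(width):
                z = depth_fn(x, y)
                if z is not None and z > depth[y][x]:
                    depth[y][x] = z
                    refresh[y][x] = intensity_fn(x, y)
    return refresh
```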

  2. A-buffer method:

    The z-buffer finds one visible surface at each pixel position; it deals only with opaque surfaces and cannot accumulate intensity values from more than one surface, as is necessary if transparent surfaces are to be displayed. The A-buffer expands the depth buffer so that each position in the buffer can reference a linked list of surfaces; thus more than one surface intensity can be taken into consideration at each pixel position, and object edges can be antialiased. Each position has two fields: a depth field, which stores a positive or negative real number, and an intensity field, which stores surface intensity information or a pointer value. If the depth field is positive, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area; the intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage. If the depth field is negative, this indicates multiple surface contributions to the pixel intensity, so the intensity field stores a pointer to a linked list of surface data, constructed similarly. Scan lines are processed to determine surface overlaps of pixels across the individual scan lines. Using opacity factors and percent of surface overlap, we can calculate the intensity of each pixel as an average of the contributions from the overlapping surfaces.

  3. Scan-line method:

    deals with multiple surfaces at once: for each scan line on the projection plane, all polygon surfaces intersecting that line are examined to determine which are visible; if surfaces overlap, depth calculations decide which is nearest to the view plane, and once determined its intensity values are stored in the refresh buffer.
    i) Initialize the required data structures:
        > edge table
        > polygon table with the coefficients of the plane equations
        > active edge list (AEL): contains the edges intersected by the current scan line, in increasing order of x
        > a flag for each surface, set on or off to indicate whether a position along the scan line is inside or outside of that surface
    ii) Go from left to right along each scan line:
        > at the leftmost boundary of a surface its flag is turned on, and at the rightmost it is turned off
        > if only one flag is on, that surface's intensity values are saved in the buffer
        > if multiple flags are on, do the depth calculation (the same z calculation as in the z-buffer method) and save the nearest surface
    (example)
    > advantage: coherence along the scan lines, since flags rarely change as we pass from one scan line to the next

  4. Lighting: models of light emitters: a point source emits light along radially diverging paths from the source position; a distributed source has an area that is not small compared to the surfaces in the scene

    When light is incident on an opaque surface, part of it is reflected and part is absorbed; the amount of incident light reflected by a surface depends on the type of material: shiny materials reflect more, while dull ones absorb more. Some light may also be transmitted through the material. Rough surfaces tend to scatter the reflected light in all directions; this scattered light is diffuse reflection (pic), so the surface appears equally bright from all viewing directions. What we call the color of an object is the color of the diffuse reflection of the incident light (e.g. a blue object under white light). In addition to diffuse reflection, light sources create highlights, or bright spots, called specular reflection; this effect is more pronounced on shiny surfaces (pic).

    DIFFUSE REFLECTION:
    > The fractional amount of incident light that is diffusely reflected can be set for each surface with the parameter kd, the diffuse-reflection coefficient or diffuse reflectivity (kd ranges from 0, a total absorber, to 1, where the reflected light has intensity near that of the incident light).
    i) Ambient light: a surface is visible even when not exposed to a light source, due to reflections off other objects; a simple way to model the combination of those reflections is a constant term: Iambdiff = kd · Ia, where Ia is the intensity of the ambient light.
    ii) Point light: model a point source whose light a surface reflects, scattered equally in all directions; based on Lambert's cosine law, a surface oriented perpendicular to the direction of the incident light appears brighter than if the surface were tilted at an oblique angle: Il,diff = kd · Il · (N·L), where Il is the intensity of the light source and N, L are unit vectors along the surface normal and toward the light source (if N·L is negative, the light source is behind the surface).
    iii) Combination of both: Idiff = kd (Ia + Il (N·L))
    SPECULAR REFLECTION:
    > When we look at an illuminated shiny surface, such as polished metal, an apple, or a person's forehead, we see a highlight, or bright spot, at certain viewing directions. This phenomenon is the result of total reflection of the incident light in a concentrated region around the specular-reflection angle.
    > The angle of specular reflection equals the angle of the incident light (law of reflection). We also use a vector V that points from the surface position toward the viewer, at an angle phi from the reflection vector R.
    > According to Phong's model, intensity is proportional to cos^ns(phi), where ns is the specular-reflection parameter assigned to each surface, determined by the type of surface we want to display (around 100 for shiny, near 1 for dull).
    > Specular reflection also depends on the angle of incidence, polarization, and color of the light; monochromatic specular intensity variations can be approximately modeled with a specular-reflection coefficient W(theta), theta being the angle of incidence. W generally remains constant at low angles and grows at high angles, and is high for metals such as silver and gold, lower for glass:
        Ispec = W(theta) · Il · (V·R)^ns
    > For many opaque materials specular reflection is nearly constant, so W(theta) is replaced with a constant ks.
    > In many cases V·R can be approximated by N·H, where H = (L + V)/|L + V|.

    FINAL COMBINED (PHONG ILLUMINATION MODEL): I = Idiff + Ispec = Ka·Ia + Kd·Il·(N·L) + Ks·Il·(N·H)^ns
    For multiple light sources, the diffuse and specular terms are summed over the sources. To ensure that no pixel intensity exceeds the maximum allowable value, we can apply normalization. To compensate for the variation of intensity with distance: f(d) = 1/(a0 + a1·d + a2·d^2), and multiply the light-source terms by f(d).
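The combined model above, sketched in Python (the function names and the max(·, 0) clamping of the dot products are my own additions):

```python
import math

def _normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(N, L, V, ka, kd, ks, ns, Ia, Il):
    """Phong illumination: I = ka*Ia + kd*Il*(N.L) + ks*Il*(N.H)^ns,
    with the halfway vector H = (L + V)/|L + V|.
    N, L, V are unit vectors (surface normal, to the light, to the viewer)."""
    n_dot_l = max(_dot(N, L), 0.0)   # light behind the surface contributes nothing
    H = _normalize(tuple(l + v for l, v in zip(L, V)))
    n_dot_h = max(_dot(N, H), 0.0)
    return ka * Ia + kd * Il * n_dot_l + ks * Il * n_dot_h ** ns
```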

  5. Shading:

    Scan-line algorithms typically apply a lighting model to obtain polygon-surface rendering in one of two ways: each polygon can be rendered with a single intensity, or the intensity can be obtained at each point of the surface using an interpolation scheme.
    Constant-intensity (flat) shading: a single intensity is calculated for each polygon, and all points over the surface of the polygon are displayed with the same intensity value. Useful for quickly displaying the general appearance of a curved surface; accurate when the object is a polyhedron (not an approximation of a curved surface) and the light sources are sufficiently far from the surface.
    Gouraud shading: this intensity-interpolation scheme, developed by Gouraud and generally referred to as Gouraud shading, renders a polygon surface by linearly interpolating intensity values across the surface.
    > Intensity values for each polygon are matched with the values of adjacent polygons along the common edges, eliminating the intensity discontinuities that can occur in flat shading.
    > Calculations: determine the average unit normal vector at each polygon vertex (a vertex is usually shared by multiple polygons) by averaging the surface normals of all polygons sharing that vertex; thus for any vertex position V:
        Nv = (sum of the Nk) / |sum of the Nk|
      Apply an illumination model to each vertex to calculate the vertex intensity, then linearly interpolate the vertex intensities over the surface of the polygon.
    > Draw a scan line that touches two edges (find the intensity at the two edge points, then at the points between them).
    > The intensity at the intersection of the scan line with a polygon edge is linearly interpolated from the intensities at the edge endpoints:
        I (at scan line y, on the edge from vertex 1 to vertex 2) = [(y - y2)/(y1 - y2)]·I1 + [(y1 - y)/(y1 - y2)]·I2
        Incrementally, at the next scan line: I' = I + (I2 - I1)/(y2 - y1)
      The same interpolation, and the same incremental form, apply horizontally along the scan line between the two edge intensities.
    > Drawbacks: highlights on the surface may be displayed with anomalous shapes, and Mach bands can still appear.
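The edge and scan-line interpolation steps can be sketched as (function names are mine):

```python
def interpolate_intensity(y, y1, I1, y2, I2):
    """Gouraud edge step: linearly interpolate intensity at scan line y along
    an edge running from (y1, I1) to (y2, I2)."""
    t = (y - y2) / (y1 - y2)
    return t * I1 + (1 - t) * I2

def gouraud_span(Ia, Ib, xa, xb):
    """Interpolate intensities across one scan-line span between the edge
    intersections at integer x positions xa and xb (inclusive)."""
    if xa == xb:
        return [Ia]
    return [Ia + (Ib - Ia) * (x - xa) / (xb - xa) for x in range(xa, xb + 1)]
```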

    Phong shading: interpolate normal vectors and then apply the illumination model at each surface point; this greatly reduces the Mach-band effect.
    > Process:
        > determine the average unit normal vector at each polygon vertex (as in Gouraud shading)
        > linearly interpolate the vertex normals over the surface of the polygon
        > draw a scan line that touches two edges
        > find the interpolated value of the normal at a point on an edge the same way as for intensities above; the incremental form N' = N + (N2 - N1)/(y2 - y1) can be used
        > interpolate the same way horizontally along the scan line

CURVE MODELLING: >a spline is a flexible strip used to produce a smooth curve through a designated set of points. Mathematically, a piecewise cubic polynomial function whose first and second derivatives are continuous across the various curve sections is a spline curve. In graphics, a spline curve is any composite curve formed from polynomial sections satisfying specified continuity conditions at the boundaries of the pieces. A spline surface can be described with two sets of orthogonal spline curves. >We specify a spline curve by giving a set of coordinate positions, called control points, which indicate the general shape of the curve. These points are then fitted with piecewise continuous parametric polynomial functions in one of two ways: interpolation (the curve passes through all control points) or approximation. >The convex polygon boundary that encloses a set of control points is called the convex hull. >The polyline connecting the sequence of control points of an approximating curve, drawn to remind the designer of the control-point ordering, is called the control graph.

>To ensure a smooth transition from one section of a piecewise parametric curve to the next, impose parametric continuity conditions:
	for each section of the spline: x = x(u), y = y(u), z = z(u); u1 <= u <= u2
	
	i) zero-order parametric continuity (C^0): the curves simply meet
	ii) first-order (C^1): the first parametric derivatives (tangent lines) of the coordinate functions above for two successive curve sections are equal at their joining point
	iii) second-order (C^2): both the first and second parametric derivatives are the same at the intersection
>geometric continuity:
	i) zero-order (G^0): same as parametric zero-order
	ii) first-order (G^1): the parametric first derivatives are proportional at the intersection of two sections (the directions of the tangents match, not necessarily their magnitudes)
	iii) second-order (G^2): both the first and second parametric derivatives are proportional at the intersection
	
>Hermite splines: (parametric cubic curves)
	if P(u) represents the parametric cubic vector function over a section between control points pk and pk+1:
		P(u) = au^3 + bu^2 + cu + d  (0 <= u <= 1), where a, b, c, d are vectors
	with boundary condition
		P(0) = pk
		P(1) = pk+1
		P'(0) = Dpk
		P'(1) = Dpk+1 (the slopes at the endpoints)
	In matrix notation:
		P(u) = [row u^3 u^2 u 1][column a b c d], where a, b, c, d are vectors
		P'(u) = [3u^2 2u 1 0][column a b c d]
	Then the boundary conditions become:
		[column pk pk+1 Dpk Dpk+1] = [matrix] [column a b c d]
		where
			matrix = [0 0 0 1]
				[1 1 1 1]
				[0 0 1 0]
				[3 2 1 0]
	Finally
		P(u) = [u^3 u^2 u 1] * M_H * [column pk pk+1 Dpk Dpk+1]
		where M_H, the inverse of the matrix above (the Hermite basis matrix), is
			[ 2 -2  1  1]
			[-3  3 -2 -1]
			[ 0  0  1  0]
			[ 1  0  0  0]
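Multiplying [u^3 u^2 u 1] by the inverse of the boundary-condition matrix gives the four Hermite blending functions, which can be evaluated directly (a sketch; the function name is mine):

```python
def hermite_point(p0, p1, d0, d1, u):
    """Evaluate a Hermite cubic section at u in [0, 1] from endpoints p0, p1
    and endpoint tangents d0, d1 (all tuples of equal dimension)."""
    h1 = 2 * u**3 - 3 * u**2 + 1    # weight of p0
    h2 = -2 * u**3 + 3 * u**2       # weight of p1
    h3 = u**3 - 2 * u**2 + u        # weight of d0
    h4 = u**3 - u**2                # weight of d1
    return tuple(h1 * a + h2 * b + h3 * c + h4 * d
                 for a, b, c, d in zip(p0, p1, d0, d1))
```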
		
>Bezier curves:
	Given n+1 control points, the Bezier curve is directly given by:
		P(u) = sum(k = 0 to n) pk BEZ_{k,n}(u); 0 <= u <= 1
		where,
			BEZ_{k,n}(u) = C(n,k) u^k (1-u)^{n-k}  (a polynomial of degree n, one less than the number of control points)
		(expand to obtain the parametric form)
	>always passes through the first and last control points, so
		P(0) = p0
		P(1) = pn
	>the parametric first derivatives at the endpoints are:
		P'(0) = n(p1 - p0), along the line joining the first two control points
		P'(1) = n(pn - pn-1), along the line joining the last two control points
	>lies within the convex hull
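The Bezier sum can be evaluated directly (a sketch; `math.comb` supplies the binomial coefficient C(n, k), and the function name is mine):

```python
from math import comb

def bezier_point(control_points, u):
    """Evaluate a degree-n Bezier curve (n+1 control points, tuples of equal
    dimension) at u in [0, 1]:
    P(u) = sum_k p_k * C(n, k) * u^k * (1-u)^(n-k)."""
    n = len(control_points) - 1
    blend = [comb(n, k) * u**k * (1 - u)**(n - k) for k in range(n + 1)]
    return tuple(sum(b * p[i] for b, p in zip(blend, control_points))
                 for i in range(len(control_points[0])))
```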